Cloud Assignment

Assignment 1

1. State any two service providers of SaaS and explain any one of the two in detail.
Ans:-
SaaS stands for Software as a Service. SaaS is also known as "On-Demand Software".
It is a software distribution model in which services are hosted by a cloud service
provider. These services are available to end users over the internet, so the end
users do not need to install any software on their devices to access them.
Example:-
 Salesforce
 Zoom

Salesforce: Salesforce is a leading SaaS tool specializing in customer relationship
management (CRM). It is built to help businesses connect with current and
prospective customers, close more deals, and deliver top customer service.
Key features
 Process automation
 Account and contact management
 Lead management
 Reports and dashboards
 Pipeline and forecast management

Zoom: The pandemic made us adopt virtual meeting platforms, and the world
welcomed Zoom, a video conferencing platform, with open arms. Be it educational
classes, professional meetings or personal meetings, Zoom has become an essential
part of our lives.

Backed by a robust cloud platform, Zoom allows users to hold virtual meetings,
conferences, webinars and events. It has many simple-to-use features such as screen
sharing, live chat, admin controls, etc. The pandemic brought a lot of video
communications companies to the surface, but Zoom's seamless performance and
easy-to-use interface set it apart.

USP: All-in-one video communications platform with top-of-the-line features and a
super-easy interface.

2. What are the fundamental principles of cloud security design?

 Implement a strong identity foundation: Implement the principle of least
privilege and enforce separation of duties with appropriate authorization for
each interaction with your AWS resources. Centralize identity management and
aim to eliminate reliance on long-term static credentials.
 Enable traceability: Monitor, alert, and audit actions and changes to your
environment in real time. Integrate log and metric collection with systems to
automatically investigate and take action.
 Apply security at all layers: Apply a defense in depth approach with multiple
security controls. Apply to all layers (for example, edge of network, VPC, load
balancing, every instance and compute service, operating system, application,
and code).
 Automate security best practices: Automated software-based security
mechanisms improve your ability to securely scale more rapidly and cost-
effectively. Create secure architectures, including the implementation of controls
that are defined and managed as code in version-controlled templates.
 Protect data in transit and at rest: Classify your data into sensitivity levels and
use mechanisms, such as encryption, tokenization, and access control where
appropriate.
 Keep people away from data: Use mechanisms and tools to reduce or eliminate
the need for direct access or manual processing of data. This reduces the risk of
mishandling or modification and human error when handling sensitive data.
 Prepare for security events: Prepare for an incident by having incident
management and investigation policy and processes that align to your
organizational requirements. Run incident response simulations and use tools
with automation to increase your speed for detection, investigation, and
recovery.
The above-mentioned seven principles should be applied to all six areas of security in
the cloud:
 Foundations
 Identity and Access Management
 Detection
 Infrastructure protection
 Data Protection
 Incident Response
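The "strong identity foundation" and least-privilege principles above can be sketched in code: access is denied by default, and a request succeeds only if an explicit allow rule matches. The policy format below is purely illustrative, not a real AWS IAM schema.

```python
# Minimal sketch of least-privilege authorization: deny by default,
# allow only when an explicit rule covers the exact request.
# The rule format is illustrative, not a real IAM policy schema.

def is_allowed(policy, user, action, resource):
    """Return True only if an explicit allow rule covers this request."""
    for rule in policy:
        if (rule["user"] == user
                and action in rule["actions"]
                and resource == rule["resource"]):
            return True
    return False  # deny by default

policy = [
    {"user": "report-service", "actions": ["read"], "resource": "sales-db"},
]

print(is_allowed(policy, "report-service", "read", "sales-db"))   # True
print(is_allowed(policy, "report-service", "write", "sales-db"))  # False
```

Because there is no catch-all rule, any action not explicitly granted (a write, or a request from an unknown user) is refused, which is the essence of the principle.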

3. Describe how cloud computing technologies can be applied to support remote
health monitoring?
 Cloud computing in the healthcare industry describes the practice of
implementing remote servers accessed via the internet to store, manage and
process healthcare-related data. This is in contrast to establishing an on-site data
center with servers, or hosting the data on a personal computer.
 Cloud storage offers a flexible solution that allows healthcare professionals and
hospitals to leverage a network of remotely accessible servers where they can
store large volumes of data in a secure environment that is maintained by IT
professionals.
 Since the introduction of the Electronic Medical Records (EMR) Mandate,
healthcare organizations across the United States have adopted cloud-based
healthcare solutions as a means of storing and protecting patient records.
 According to BCC research, the global healthcare cloud computing market is
expected to hit $35 billion by 2022, with an annualized growth rate of 11.6%.
 Despite that, 69% of respondents in a 2018 survey indicated that the hospital
they worked at did not have a plan for moving existing data centers to the cloud.

Importance:-
 Efficient electronic medical record keeping
 Streamlined collaborative patient care
 Reduced data storage costs
 Superior data security
 A foundation for big data applications
 Flexibility and easy scaling
 Drives medical research
 Drives data interoperability
 Cloud-connected medical devices

4. Discuss the use of hypervisor in cloud computing.

A hypervisor, also known as a virtual machine monitor or VMM, is a piece of
software that allows us to build and run virtual machines (VMs). A hypervisor allows
a single host computer to support multiple virtual machines by sharing resources,
including memory and processing.
Use:-
Hypervisors allow the use of more of a system's available resources and provide
greater IT versatility, because the guest VMs are independent of the host hardware,
which is one of the major benefits of the hypervisor. In other words, VMs can be
quickly moved between servers. Since a hypervisor allows several virtual machines
to operate on a single physical server, it helps us to reduce:
 The space requirements of the server
 The energy use
 The maintenance requirements of the server.
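The resource-sharing role of a hypervisor can be illustrated with a toy model (the class and numbers below are illustrative, not a real hypervisor API): a new VM is admitted only while the host still has enough free memory and vCPUs.

```python
# Toy model of how a hypervisor shares one host's fixed resources among
# several VMs: a VM is created only if enough memory and vCPUs remain.

class Hypervisor:
    def __init__(self, total_mem_gb, total_vcpus):
        self.free_mem = total_mem_gb
        self.free_vcpus = total_vcpus
        self.vms = {}

    def create_vm(self, name, mem_gb, vcpus):
        if mem_gb > self.free_mem or vcpus > self.free_vcpus:
            return False  # host cannot support another VM of this size
        self.free_mem -= mem_gb
        self.free_vcpus -= vcpus
        self.vms[name] = (mem_gb, vcpus)
        return True

host = Hypervisor(total_mem_gb=64, total_vcpus=16)
print(host.create_vm("web", 16, 4))   # True
print(host.create_vm("db", 32, 8))    # True
print(host.create_vm("big", 32, 8))   # False: only 16 GB / 4 vCPUs left
```

Several VMs run on the one "physical" host until its capacity is exhausted, which is exactly the consolidation benefit described above.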

5. Explain the cloud resource pooling.


A resource pool is a group of resources that can be assigned to users. Resources of
any kind, including computation, network, and storage, can be pooled. It adds an
abstraction layer that enables uniform resource use and presentation. In cloud data
centers, a sizable pool of physical resources is maintained and made available to
consumers as virtual services.
Any resource from this pool may be given to one user or application only, or it may
even be shared by several users or apps. Additionally, resources are dynamically
provided according to need rather than being permanently allocated to users. As
load or demand fluctuates over time, this results in efficient resource usage.
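The dynamic provisioning described above can be sketched as a small shared pool that hands out units on demand and reclaims them on release (the class below is an illustrative model, not a real cloud API):

```python
# Sketch of resource pooling: resources are allocated from a shared pool
# on demand and returned when released, rather than being permanently
# assigned to one user.

class ResourcePool:
    def __init__(self, capacity):
        self.capacity = capacity
        self.allocated = {}              # user -> units currently held

    def acquire(self, user, units):
        in_use = sum(self.allocated.values())
        if in_use + units > self.capacity:
            return False                 # pool exhausted for now
        self.allocated[user] = self.allocated.get(user, 0) + units
        return True

    def release(self, user):
        self.allocated.pop(user, None)   # units return to the pool

pool = ResourcePool(capacity=10)
pool.acquire("app-a", 6)
print(pool.acquire("app-b", 6))  # False: only 4 units left
pool.release("app-a")            # app-a's units go back to the pool
print(pool.acquire("app-b", 6))  # True
```

The same physical units serve different users at different times, which is what makes pooled usage efficient as demand fluctuates.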

6. Outline elasticity in cloud. Mention what is the difference between elasticity and
scalability in cloud computing?
Cloud Elasticity: Elasticity refers to the ability of a cloud to automatically expand or
compress the infrastructural resources on a sudden rise or fall in requirement, so
that the workload can be managed efficiently. This elasticity helps to minimize
infrastructural costs. It is not applicable to all kinds of environments; it is helpful
only in those scenarios where the resource requirements fluctuate up and down
suddenly for a specific time interval. It is not very practical where a persistent
resource infrastructure is required to handle a heavy workload.
Elasticity is vital for mission-critical or business-critical applications, where any
compromise in performance may lead to huge business losses. Thus, elasticity
comes into the picture where additional resources are provisioned for such
applications to meet the performance requirements.
It works in such a way that when the number of client accesses increases,
applications are automatically provisioned with additional computing, storage and
network resources such as CPU, memory, storage or bandwidth, and when there are
fewer clients, those resources are automatically reduced as per requirement.

S.No. | Cloud Elasticity | Cloud Scalability
1 | It is used just to fulfil a sudden requirement in the workload for a short period. | It is used to fulfil a static boost in the workload.
2 | It is preferred to satisfy dynamic modifications, where the required resources can increase or decrease. | It is preferred to handle growth in the workload in an organisation.
3 | Cloud elasticity is generally used by small enterprises whose workload expands only for a specific period. | Cloud scalability is utilised by big enterprises.
4 | It is a short-term event that is used to deal with an unplanned or sudden growth in demand. | It is a long-term event that is used to deal with an expected growth in demand.
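A minimal sketch of an elastic sizing rule, assuming an illustrative per-instance capacity of 100 requests per second: just enough instances are provisioned for the current demand, so the fleet scales out on a sudden spike and back in when demand drops.

```python
# Elastic sizing sketch: provision only as many instances as current
# demand requires. The capacity figure is an assumption for illustration.
import math

def desired_instances(requests_per_sec, per_instance_capacity=100):
    """Return the minimum fleet size for the current request rate."""
    return max(1, math.ceil(requests_per_sec / per_instance_capacity))

print(desired_instances(250))  # 3  (sudden spike: scale out)
print(desired_instances(40))   # 1  (demand drops: scale back in)
```

Re-evaluating this rule periodically is what lets an elastic system track short-lived fluctuations instead of paying for peak capacity all the time.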

7. What are the innovative characteristics of cloud computing?

 On-demand self-service: Cloud computing services do not require any human
administrators; users themselves are able to provision, monitor and manage
computing resources as needed.
 Broad network access: Computing services are generally provided over
standard networks and to heterogeneous devices.
 Rapid elasticity: Computing services should have IT resources that are able
to scale out and in quickly and on an as-needed basis. Whenever the user
requires a service, it is provided, and it is scaled in as soon as the requirement
is over.
 Resource pooling: The IT resources (e.g., networks, servers, storage,
applications, and services) present are shared across multiple applications and
tenants in an uncommitted manner. Multiple clients are provided service from
the same physical resource.
 Measured service: Resource utilization is tracked for each application and
tenant; this provides both the user and the resource provider with an account
of what has been used. This is done for various reasons, like monitoring, billing,
and effective use of resources.
 Multi-tenancy: Cloud computing providers can support multiple tenants (users
or organizations) on a single set of shared resources.
 Virtualization: Cloud computing providers use virtualization technology to
abstract underlying hardware resources and present them as logical resources to
users.
 Resilient computing: Cloud computing services are typically designed with
redundancy and fault tolerance in mind, which ensures high availability and
reliability.
 Flexible pricing models: Cloud providers offer a variety of pricing models,
including pay-per-use, subscription-based, and spot pricing, allowing users to
choose the option that best suits their needs.
 Security: Cloud providers invest heavily in security measures to protect their
users’ data and ensure the privacy of sensitive information.
 Automation: Cloud computing services are often highly automated, allowing
users to deploy and manage resources with minimal manual intervention.
 Sustainability: Cloud providers are increasingly focused on sustainable practices,
such as energy-efficient data centers and the use of renewable energy sources,
to reduce their environmental impact.
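The "measured service" characteristic above can be sketched as simple pay-per-use metering (the rate table below is assumed purely for illustration):

```python
# Pay-per-use billing sketch: metered usage per tenant multiplied by
# unit rates. The rates are illustrative assumptions, not real prices.

RATES = {"cpu_hours": 0.05, "gb_stored": 0.02}  # assumed USD unit prices

def bill(usage):
    """Compute a tenant's charge from metered usage."""
    return round(sum(usage[key] * RATES[key] for key in usage), 2)

print(bill({"cpu_hours": 100, "gb_stored": 50}))  # 6.0
```

Because usage is tracked per tenant, both the provider and the user get the same account of what was consumed, which supports billing as well as capacity planning.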

8. State the limitations of virtualization along with its strong usabilities.


Advantages:-
 Cheap: IT infrastructures find virtualization to be a more affordable
implementation option because it does not require the use or installation of
actual hardware components.
 Efficient: Virtualization also enables efficient, automatic upgrades of both
software and hardware by downloading the new versions from a third-party
supplier.
 Disaster recovery: When servers are virtualized, disaster recovery is relatively
simple thanks to fast backup restoration and current snapshots of your
virtual machines.
 Deployment: Resources may be deployed much more quickly when
employing virtualization technology. It is feasible to significantly reduce the
amount of time required for setting up physical devices or creating local
networks.
 Saves Energy: Both individuals and businesses can save energy by using
virtualization. The rate of energy consumption can be reduced because no
local hardware or software alternatives are being employed.
 Improved uptime: Virtualization technologies have increased uptime
dramatically. An uptime of 99.9999% is offered by some providers, and even
low-cost providers now offer uptime of 99.99%.
 Consistent cost: People and corporations can have predictable expenses for
their IT requirements because third-party vendors frequently offer choices
for virtualization.
Disadvantages:-

 Exorbitant costs of implementation: In a virtualization environment, the
suppliers may incur very significant implementation expenses, since devices
must be designed, made, or purchased when hardware and software are
eventually required.
 Restraints: Virtualization cannot be used with every server and application
currently in existence. Therefore, certain firms' IT infrastructures would not
be able to support the virtualized solutions. They no longer receive support
from a number of vendors as well. The demands of both individuals and
organisations must be served using a hybrid approach.
 Problems with availability: Long-term data linking is required. If not, the
business would become less competitive in the market.
 Time-intensive: In comparison to local systems, virtualization takes less time
to implement, but it ultimately costs users time. This is due to the fact that
there are additional procedures that need to be completed in order to attain
the desired result.
 Threats to security: Information is the currency of our time, and the success
of a corporation depends on its information; hence virtualized environments
are frequently targeted by attackers.
 Problems with scalability: In a virtualization network, growth generates
latency since multiple firms share the same resources. There wouldn't be
much that could be done to stop it, but one powerful presence could syphon
money away from other, smaller businesses.
 A Number of links must interact: With virtualization, people lose control
because numerous ties are required to cooperate in order to complete the
same task.
9. What are the disadvantages of virtualization?

The disadvantages of virtualization are the same as those listed under Question 8
above: exorbitant costs of implementation, restraints on which servers and
applications can be virtualized, problems with availability, time-intensive
procedures, threats to security, problems with scalability, and the need for a
number of links to interact.

10. Categorize the different types of clouds.

 Public Cloud: Public cloud is open to all to store and access information via
the Internet using the pay-per-usage method.
In public cloud, computing resources are managed and operated by the Cloud
Service Provider (CSP).
Example: Amazon Elastic Compute Cloud (EC2), IBM SmartCloud Enterprise,
Microsoft Windows Azure Services Platform, Google App Engine.
 Private Cloud: Private cloud is also known as an internal cloud or corporate
cloud. It is used by organizations to build and manage their own data centers,
either internally or through a third party. It can be deployed using open-source
tools such as OpenStack and Eucalyptus.
Based on location and management, the National Institute of Standards and
Technology (NIST) divides private clouds into the following two parts:
o On-premise private cloud
o Outsourced private cloud
 Hybrid cloud: Hybrid Cloud is a combination of the public cloud and the
private cloud. Hybrid cloud is partially secure because the services which are
running on the public cloud can be accessed by anyone, while the services
which are running on a private cloud can be accessed only by the
organization's users.
Example: Google Application Suite (Gmail, Google Apps, and Google Drive),
Office 365 (MS Office on the Web and One Drive), Amazon Web Services.
 Community Cloud: Community cloud allows systems and services to be
accessible by a group of several organizations to share the information
between the organization and a specific community. It is owned, managed,
and operated by one or more organizations in the community, a third party,
or a combination of them.
Example: Health Care community cloud
Assignment 2
1. Explain Application Development, Infrastructure and system development helpful in
building Cloud Computing environments.

Development of cloud computing applications happens by leveraging platforms and
frameworks that provide different types of services, from the bare-metal
infrastructure to customizable applications serving specific purposes. Various such
platforms or technologies (PaaS) that are available for users to build and host an
application are:
Amazon Web Services: AWS offers virtual compute, storage, networking and
complete computing stacks. It is known for its on-demand services, namely Elastic
Compute Cloud (EC2) and Simple Storage Service (S3).
Google AppEngine: Launched in 2008, it provides an alternative to fixed
applications (SaaS) and raw hardware (IaaS). App Engine manages the
infrastructure and provides a development platform to create apps, leveraging
Google's infrastructure as a hosting platform.
Microsoft Azure: It is also a scalable runtime environment for web and distributed
applications. It provides additional services such as support for storage (relational
data and blobs), networking, caching, content delivery and others.

2. List and explain any 4 Cloud computing platforms and technologies.

 Amazon Web Services (AWS) –
AWS provides a wide range of cloud IaaS services, ranging from virtual compute,
storage, and networking to complete computing stacks. AWS is well known for its
on-demand storage and compute services, named Elastic Compute Cloud (EC2)
and Simple Storage Service (S3). EC2 offers customizable virtual hardware to the
end user, which can be utilized as the base infrastructure for deploying computing
systems in the cloud. It is possible to choose from a large variety of virtual
hardware configurations, including GPU and cluster instances. Either the AWS
console, a wide-ranging web portal for accessing AWS services, or the web
services API, available for several programming languages, is used to deploy EC2
instances. EC2 also offers the capability of saving a specific running instance as an
image, thus allowing users to create their own templates for deploying systems.
S3 stores these templates and delivers persistent storage on demand. S3 is
organized into buckets, which contain objects that are stored in binary form and
can be enriched with attributes. End users can store objects of any size, from
basic files to full disk images, and have them retrievable from anywhere. In
addition to EC2 and S3, a wide range of services can be leveraged to build virtual
computing systems, including networking support, caching systems, DNS,
database support, and others.
 Google AppEngine –
Google AppEngine is a scalable runtime environment mostly dedicated to
executing web applications. These applications take advantage of Google's large
computing infrastructure to dynamically scale as per demand. AppEngine offers
both a secure execution environment and a collection of services which simplify
the development of scalable and high-performance web applications. These
services include in-memory caching, a scalable data store, job queues, messaging,
and cron tasks. Developers and engineers can build and test applications on their
own systems using the AppEngine SDK, which replicates the production runtime
environment and helps test and profile applications. On completion of
development, developers can easily move their applications to AppEngine, set
quotas to contain the costs generated, and make them available to the world.
Currently, the supported programming languages are Python, Java, and Go.
 Microsoft Azure –
Microsoft Azure is a cloud operating system and a platform on which users can
develop applications in the cloud. Generally, it provides a scalable runtime
environment for web applications and distributed applications. Applications in
Azure are organized around the concept of roles, which identify a distribution unit
for applications and express the application's logic. Azure provides a set of
additional services that complement application execution, such as support for
storage, networking, caching, content delivery, and others.
 Force.com and Salesforce.com –
Force.com is a cloud computing platform on which users can develop social
enterprise applications. The platform is the basis of SalesForce.com, a Software-
as-a-Service solution for customer relationship management. Force.com allows
creating applications by composing ready-to-use blocks: a complete set of
components supporting all the activities of an enterprise is available. Everything
from the design of the data layout to the definition of business rules and the user
interface is provided by Force.com. This platform is completely hosted in the
cloud and provides complete access to its functionalities, and to those
implemented in the hosted applications, through web services technologies.
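The bucket/object structure described for S3 above can be modeled in a few lines. This is an in-memory sketch of the semantics only (named buckets holding binary objects with optional attributes); real access would go through an SDK such as boto3.

```python
# In-memory model of S3-style storage semantics: buckets contain
# binary objects of arbitrary size, each with optional attributes.
# Illustrative only; not a real S3 client.

class ObjectStore:
    def __init__(self):
        self.buckets = {}

    def create_bucket(self, name):
        self.buckets.setdefault(name, {})

    def put_object(self, bucket, key, data: bytes, **attributes):
        self.buckets[bucket][key] = {"data": data, "attributes": attributes}

    def get_object(self, bucket, key):
        return self.buckets[bucket][key]["data"]

store = ObjectStore()
store.create_bucket("templates")
# Store a (tiny, fake) disk image with an attribute attached.
store.put_object("templates", "disk.img", b"\x00" * 4, role="vm-image")
print(store.get_object("templates", "disk.img") == b"\x00" * 4)  # True
```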

3. What are the characteristics of cloud architecture that separate it from the
traditional one?

 In cloud architecture, the server hardware is provided, and its maintenance is
done by the service provider.
 Users can draw the services they require over the internet eliminating the need
to purchase any new hardware.
 Users pay for the services they use. It does away with the need to pay any fixed
monthly plan fee as in traditional hosting. It also ensures users do not have to
buy resources they do not require and leave them un-utilized.
 Cloud architecture is scalable on demand. Users can increase or decrease their
resources depending on their business needs with just a few clicks without the
need of any physical effort as in traditional hosting.
 Cloud hosting is capable of handling workloads seamlessly without any possibility
of failure. Since it functions as a network, even if there is a failure in one of the
components, the services are available from the other active components.
 Cloud offers better data security and recovery from any natural disasters and
human errors as it backs up data over multiple locations.
 According to the need, cloud architecture fulfills the hardware demands.
 Cloud architecture can be easily scaled to the resource as per the demands.
 Cloud architecture is able to handle and manage dynamic workloads without any
failure.
4. List some of the challenges in cloud computing?

 Security
 Password Security
 Cost Management
 Lack of expertise
 Internet Connectivity
 Control or Governance
 Compliance
 Multiple Cloud Management
 Creating a private cloud
 Performance
 Migration
 Interoperability and Portability
 Reliability and High Availability
 Hybrid-Cloud Complexity

5. Create and justify a cloud architecture application design with a neat sketch.

1. Frontend :
Frontend of the cloud architecture refers to the client side of the cloud computing
system. This means it contains all the user interfaces and applications which are
used by the client to access the cloud computing services/resources, for example,
the use of a web browser to access the cloud platform.
Client Infrastructure – Client Infrastructure is a part of the frontend component. It
contains the applications and user interfaces which are required to access the cloud
platform.
In other words, it provides a GUI( Graphical User Interface ) to interact with the
cloud.
2. Backend :
Backend refers to the cloud itself which is used by the service provider. It contains
the resources as well as manages the resources and provides security mechanisms.
Along with this, it includes huge storage, virtual applications, virtual machines, traffic
control mechanisms, deployment models, etc.

6. Discuss the various dimensions of scalability and performance laws in distributed
systems.

A scalable system in a distributed system refers to a system in which there is a
possibility of extending the system as the number of users and resources grows with
time.
 The system should be capable enough to handle the load, so that the system
and application software need not change when the scale of the system
increases.
 To exemplify, with the increasing number of users and workstations the
frequency of file access is likely to increase in a distributed system. So, there
must be some possibility to add more servers to avoid any issue in file
accessing handling.
 Scalability is generally considered concerning hardware and software. In
hardware, scalability refers to the ability to change workloads by altering
hardware resources such as processors, memory, and hard disc space.
Software scalability refers to the capacity to adapt to changing workloads by
altering the scheduling mechanism and parallelism level.
Measures of Scalability:
 Size Scalability
 Geographical Scalability
 Administrative Scalability
Types of Scalability:
 Horizontal Scalability: Horizontal Scalability implies the addition of new
servers to the existing set of resources in the system. The major benefit lies
in the scaling of the system dynamically. For example, Cassandra and
MongoDB. Horizontal scaling is done in them by adding more machines.
Furthermore, the load balancer is employed for distributing the load on the
available servers which increases overall performance.
 Vertical Scalability: Vertical Scalability refers to the addition of more power
to the existing pool of resources like servers. For example, MySQL. Here,
scaling is carried out by switching from smaller to bigger machines.
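Horizontal scaling with a load balancer, as described above, can be sketched as a round-robin router over the current server pool (server names are illustrative); capacity grows simply by adding a machine to the pool.

```python
# Round-robin load balancing sketch: requests are spread evenly across
# whatever servers are currently in the pool, so horizontal scaling is
# just appending another server.
from itertools import count

class LoadBalancer:
    def __init__(self, servers):
        self.servers = list(servers)
        self._n = count()                 # monotonically increasing counter

    def route(self):
        return self.servers[next(self._n) % len(self.servers)]

lb = LoadBalancer(["s1", "s2"])
print([lb.route() for _ in range(4)])  # ['s1', 's2', 's1', 's2']
lb.servers.append("s3")                # scale out by adding a server
```

Contrast this with vertical scaling, where the pool stays at one machine and only that machine's CPU and memory grow.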

7. What are the characteristics of cloud architecture that separate it from the
traditional one?

See Question 3 above; the same characteristics apply.

8. Interpret the cloud resource pooling.

See Question 5 of Assignment 1 above: a resource pool is a group of resources of
any kind (computation, network, and storage) that is maintained in cloud data
centers and provisioned dynamically to users as virtual services, so that resource
usage stays efficient as demand fluctuates.
Slot 1
1. What are the fundamental principles of cloud security design?

 Implement a strong identity foundation: Implement the principle of least


privilege and enforce separation of duties with appropriate authorization for
each interaction with your AWS resources. Centralize identity management and
aim to eliminate reliance on long-term static credentials.
 Enable traceability: Monitor, alert, and audit actions and changes to your
environment in real time. Integrate log and metric collection with systems to
automatically investigate and take action.
 Apply security at all layers: Apply a defense in depth approach with multiple
security controls. Apply to all layers (for example, edge of network, VPC, load
balancing, every instance and compute service, operating system, application,
and code).
 Automate security best practices: Automated software-based security
mechanisms improve your ability to securely scale more rapidly and cost-
effectively. Create secure architectures, including the implementation of controls
that are defined and managed as code in version-controlled templates.
 Protect data in transit and at rest: Classify your data into sensitivity levels and
use mechanisms, such as encryption, tokenization, and access control where
appropriate.
 Keep people away from data: Use mechanisms and tools to reduce or eliminate
the need for direct access or manual processing of data. This reduces the risk of
mishandling or modification and human error when handling sensitive data.
 Prepare for security events: Prepare for an incident by having incident
management and investigation policy and processes that align to your
organizational requirements. Run incident response simulations and use tools
with automation to increase your speed for detection, investigation, and
recovery.
The above-mentioned seven principles should be applied to all six areas of
security in the cloud:
 Foundations
 Identity and Access Management
 Detection
 Infrastructure protection
 Data Protection
 Incident Response
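As a concrete illustration of the data-protection mechanisms named above, here is a toy tokenization sketch. The `TokenVault` class and its in-memory dict are hypothetical stand-ins for a hardened tokenization service; the point is only that downstream systems handle an opaque token instead of the sensitive value itself.

```python
import secrets

# Toy tokenization: swap a sensitive value for an opaque token. Only the
# vault (normally a hardened, access-controlled service; here just a dict)
# can map the token back to the real value.
class TokenVault:
    def __init__(self):
        self._vault = {}

    def tokenize(self, sensitive_value):
        token = "tok_" + secrets.token_hex(8)  # unpredictable, meaningless
        self._vault[token] = sensitive_value
        return token

    def detokenize(self, token):
        return self._vault[token]

vault = TokenVault()
token = vault.tokenize("4111-1111-1111-1111")
# Downstream systems store and log only the token, never the card number,
# which also supports the "keep people away from data" principle.
print(token.startswith("tok_"))
print(vault.detokenize(token) == "4111-1111-1111-1111")
```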


3. What are the innovative characteristics of cloud computing?

 On-demand self-service: Cloud computing services do not require human
administrators; users themselves can provision, monitor, and manage computing
resources as needed.
 Broad network access: Computing services are provided over standard
networks and are accessible from heterogeneous devices.
 Rapid elasticity: Computing services should have IT resources that can scale
out and in quickly, on an as-needed basis. Resources are provided whenever the
user requires them and are scaled back in as soon as the requirement ends.
 Resource pooling: IT resources (e.g., networks, servers, storage, applications,
and services) are shared across multiple applications and tenants in an
uncommitted manner. Multiple clients are served from the same physical
resource.
 Measured service: Resource utilization is tracked for each application and
tenant, providing both the user and the resource provider with an account of
what has been used. This is done for reasons such as billing and monitoring
effective use of resources.
 Multi-tenancy: Cloud computing providers can support multiple tenants (users
or organizations) on a single set of shared resources.
 Virtualization: Cloud computing providers use virtualization technology to
abstract underlying hardware resources and present them as logical resources to
users.
 Resilient computing: Cloud computing services are typically designed with
redundancy and fault tolerance in mind, which ensures high availability and
reliability.
 Flexible pricing models: Cloud providers offer a variety of pricing models,
including pay-per-use, subscription-based, and spot pricing, allowing users to
choose the option that best suits their needs.
 Security: Cloud providers invest heavily in security measures to protect their
users’ data and ensure the privacy of sensitive information.
 Automation: Cloud computing services are often highly automated, allowing
users to deploy and manage resources with minimal manual intervention.
 Sustainability: Cloud providers are increasingly focused on sustainable practices,
such as energy-efficient data centers and the use of renewable energy sources,
to reduce their environmental impact.
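The "measured service" characteristic above can be illustrated with a small metering sketch. The rates and metric names here are invented for illustration, not any provider's actual pricing; the point is that usage is tracked per tenant and turned into a pay-per-use bill.

```python
# Toy metering for "measured service": track per-tenant usage and bill it.
RATES = {"cpu_hours": 0.05, "gb_stored": 0.02, "gb_transferred": 0.09}

usage = {}  # tenant -> {metric: amount}

def record(tenant, metric, amount):
    usage.setdefault(tenant, {}).setdefault(metric, 0)
    usage[tenant][metric] += amount

def bill(tenant):
    # Pay-per-use: each metered quantity times its rate.
    return round(sum(RATES[m] * v for m, v in usage.get(tenant, {}).items()), 2)

record("acme", "cpu_hours", 100)      # 100 CPU-hours this month
record("acme", "gb_stored", 50)       # 50 GB in object storage
record("acme", "gb_transferred", 10)  # 10 GB egress
print(bill("acme"))  # 100*0.05 + 50*0.02 + 10*0.09 = 6.9
```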

4. Discuss the application of high performance and high throughput system


High Performance Computing (HPC) is a term used to describe the use of
supercomputers and parallel processing strategies to carry out difficult calculations
and data analysis activities. From scientific research to engineering and industrial
design, HPC is employed in a wide range of disciplines and applications. Here are a
few of the most significant HPC use cases and applications:
 Scientific research: HPC is widely utilized in this sector, especially in areas like
physics, chemistry, and astronomy. With standard computer techniques, it
would be hard to model complex physical events, examine massive data sets,
or carry out sophisticated calculations.
 Weather forecasting: The task of forecasting the weather is difficult and
data-intensive, requiring sophisticated algorithms and a lot of computational
power. Simulated weather models are executed on HPC computers to predict
weather patterns.
 Healthcare: HPC is being used more and more in the medical field for
activities like medication discovery, genome sequencing, and image analysis.
Large volumes of medical data can be processed by HPC systems rapidly and
accurately, improving patient diagnosis and care.
 Energy and environmental studies: HPC is employed to simulate and model
complex systems, such as climate change and renewable energy sources, in
the energy and environmental sciences. Researchers can use HPC systems to
streamline energy systems, cut carbon emissions, and increase the resilience
of our energy infrastructure.
 Engineering and Design: HPC is used in engineering and design to model and
evaluate complex systems, like those found in vehicles, buildings, and
aeroplanes. Virtual simulations performed by HPC systems can assist
engineers in identifying potential problems and improving designs before
they are built.

5. Outline applications of cloud computing with schematic.

6. Outline the architecture of cloud computing.
1. Frontend :
Frontend of the cloud architecture refers to the client side of cloud computing
system. Means it contains all the user interfaces and applications which are used by
the client to access the cloud computing services/resources. For example, use of a
web browser to access the cloud platform.
Client Infrastructure – Client Infrastructure is a part of the frontend component. It
contains the applications and user interfaces which are required to access the cloud
platform.
In other words, it provides a GUI( Graphical User Interface ) to interact with the
cloud.
2. Backend :
Backend refers to the cloud itself which is used by the service provider. It contains
the resources as well as manages the resources and provides security mechanisms.
Along with this, it includes huge storage, virtual applications, virtual machines, traffic
control mechanisms, deployment models, etc.
7. Outline elasticity in cloud. Mention what is the difference between elasticity and
scalability in cloud computing?
Cloud Elasticity: Elasticity refers to the ability of a cloud to automatically expand or
compress the infrastructural resources on a sudden up and down in the requirement
so that the workload can be managed efficiently. This elasticity helps to minimize
infrastructural costs. It is not applicable to all kinds of environments; it is helpful
only in scenarios where the resource requirements fluctuate up and down
suddenly for a specific time interval. It is not practical where a persistent
resource infrastructure is required to handle a heavy workload.
Elasticity is vital for mission-critical or business-critical applications, where any
compromise in performance may lead to huge business losses. Elasticity therefore
comes into the picture when additional resources are provisioned for such
applications to meet performance requirements.
It works in such a way that when the number of client accesses increases,
applications are automatically provisioned with additional computing, storage,
and network resources such as CPU, memory, storage, or bandwidth, and when
fewer clients are present, those resources are automatically reduced as per
requirement.
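The scale-out/scale-in behaviour described above can be sketched as a toy autoscaler. The per-instance capacity figure is an assumption chosen for illustration.

```python
# Toy elastic auto-scaling: add instances when load rises, remove them
# when demand falls, so capacity tracks the fluctuating workload.
TARGET_RPS_PER_INSTANCE = 100  # assumed capacity of one instance

def required_instances(current_rps, minimum=1):
    # Ceiling division without importing math: enough instances to cover load.
    return max(minimum, -(-current_rps // TARGET_RPS_PER_INSTANCE))

for rps in [80, 250, 900, 400, 50]:   # fluctuating request rate over time
    print(f"load={rps:>4} rps -> {required_instances(rps)} instance(s)")
```

At 900 requests per second the sketch scales out to 9 instances; when demand drops back to 50, it scales in to 1, which is exactly the cost-minimizing behaviour elasticity provides.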

S.No.  Cloud Elasticity                                    Cloud Scalability

1      Used to fulfil a sudden requirement in the          Used to fulfil a static boost in
       workload for a short period.                        the workload.
2      Preferred to satisfy dynamic modifications,         Preferred to handle growth in the
       where the required resources can increase           workload of an organisation.
       or decrease.
3      Generally used by small enterprises whose           Utilised by big enterprises.
       workload expands only for a specific period.
4      A short-term event used to deal with                A long-term event used to deal
       unplanned or sudden growth in demand.               with expected growth in demand.
Slot 2
1. What is storage networking?
Storage networking is the practice of linking together storage devices and connecting
them to other IT networks. Storage networks provide a centralized repository for
digital data that can be accessed by many users, and they use high-speed
connections to provide fast performance. It’s most common to find storage networks
in enterprise settings, although some vendors do sell networked storage products
for consumers and small businesses.
The phrase “storage networking” is commonly used in reference to storage area
networks (SANs). A SAN links together multiple storage devices and provides block-
level storage that can be accessed by servers.

2. List and Explain the 3 major milestones of distributed systems that led to the
evolution of Cloud Computing.

 Mainframe computing:
Mainframes which first came into existence in 1951 are highly powerful and
reliable computing machines. These are responsible for handling large data such
as massive input-output operations. Even today these are used for bulk
processing tasks such as online transactions. These systems have almost no
downtime and high fault tolerance. They greatly increased the processing
capabilities of organizations, but they were very expensive. To reduce this cost,
cluster computing emerged as an alternative to mainframe technology.

 Cluster computing:
In 1980s, cluster computing came as an alternative to mainframe computing.
Each machine in the cluster was connected to each other by a network with high
bandwidth. These were way cheaper than those mainframe systems. These were
equally capable of high computations. Also, new nodes could easily be added to
the cluster if it was required. Thus, the problem of the cost was solved to some
extent but the problem related to geographical restrictions still pertained. To
solve this, the concept of grid computing was introduced.

 Grid computing:
In 1990s, the concept of grid computing was introduced. It means that different
systems were placed at entirely different geographical locations and these all
were connected via the internet. These systems belonged to different
organizations, and thus the grid consisted of heterogeneous nodes. Although it
solved some problems, new problems emerged as the distance between the
nodes increased. The main problem encountered was the low availability of
high-bandwidth connectivity, along with other network-related issues. Thus,
cloud computing is often referred to as the "successor of grid computing".

3. State any three essential characteristics of cloud computing in details.


 On-Demand Self-Service: With cloud computing, you can provision computing
services, like server time and network storage, automatically. You won’t need to
interact with the service provider. Cloud customers can access their cloud accounts
through a web self-service portal to view their cloud services, monitor their usage,
and provision and de-provision services.
 Resource Pooling: With resource pooling, multiple customers can share physical
resources using a multi-tenant model. This model assigns and reassigns physical and
virtual resources based on demand. Multi-tenancy allows customers to share the
same applications or infrastructure while maintaining privacy and security. Though
customers won't know the exact location of their resources, they may be able to
specify the location at a higher level of abstraction, such as a country, state, or data
center. Memory, processing, and bandwidth are among the resources that
customers can pool.
 Rapid Elasticity: Cloud services can be elastically provisioned and released,
sometimes automatically, so customers can scale quickly based on demand. The
capabilities available for provisioning are practically unlimited. Customers can
engage with these capabilities at any time in any quantity. Customers can also scale
cloud use, capacity, and cost without extra contracts or fees. With rapid elasticity,
customers won't need to buy computer hardware; instead, they can use the cloud
provider's computing resources.
 Flexibility: Companies need to scale as their business grows. The cloud provides
customers with more freedom to scale as they please without restarting the server.
They can also choose from several payment options to avoid overspending on
resources they won't need.
 Remote Work: Cloud computing helps users work remotely. Remote workers can
safely and quickly access corporate data via their devices, including laptops and
smartphones. Employees who work remotely can also communicate with each other
and perform their jobs effectively using the cloud.

4. Explain:

 System metric : System metrics are measurement types found in the system. Each
resource that can be monitored for performance, availability, reliability, and other
attributes has one or more metrics about which data can be collected.
Cloud metrics are the logs of data that a cloud infrastructure or application
generates. Using the data, organizations can detect, monitor, and respond to various
changes in costs, security, and performance of their cloud environments.
By collecting, analyzing, and acting on the right cloud metrics, you can:
o Optimize billing and cloud costs
o Ensure compliance and security management
o Troubleshoot issues as soon as they arise to prevent them from affecting
your entire infrastructure
o Manage service level agreements (SLAs)
o Manage application performance
o Properly allocate resources in the cloud
Examples: uptime or availability, CPU utilization, memory utilization, requests
per minute, disk utilization.
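A minimal sketch of collecting one such metric and alerting on it follows; the window size and threshold are arbitrary illustrative values, not recommendations.

```python
from collections import deque

# Toy metric monitor: keep a rolling window of CPU-utilization samples
# and flag sustained high usage rather than a single brief spike.
class MetricMonitor:
    def __init__(self, window=5, threshold=80.0):
        self.samples = deque(maxlen=window)
        self.threshold = threshold

    def record(self, value):
        self.samples.append(value)  # old samples fall off automatically

    def average(self):
        return sum(self.samples) / len(self.samples)

    def should_alert(self):
        # Alert only when the window is full AND the average is high,
        # so one momentary spike does not trigger a response.
        return (len(self.samples) == self.samples.maxlen
                and self.average() > self.threshold)

cpu = MetricMonitor(window=3, threshold=80.0)
for sample in [40, 95, 90, 92]:
    cpu.record(sample)
print(round(cpu.average(), 1), cpu.should_alert())
```

This mirrors the "troubleshoot issues as soon as they arise" use of cloud metrics: collection, aggregation, then an automated response trigger.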

 Load testing: Load testing is where we check an application's performance by
applying some load that is either less than or equal to the desired load. Here,
load means an N number of users using the application simultaneously or
sending requests to the server at the same time.
Load testing will help to detect the maximum operating capacity of an application
and any blockages or bottlenecks.
It governs how the software application performs while being accessed by several
users at the same time.
The load testing is mainly used to test the Client/Server's performance and
applications that are web-based.
In other words, load testing is used to find out whether the infrastructure
hosting the application is sufficient, and whether the application's performance
is maintained when it is at its maximum user load.
Generally, load testing is used to determine how many concurrent users the
application can handle and how the application scales in terms of hardware,
network capacity, etc.
Load testing is essential because of the following factors:
o It guarantees the system's consistency and performance.
o The load testing is necessary if any code changes occur in the application that
may affect the application's performance.
o It is important because it reproduces the real user scenarios.
o It helps find the system's bottlenecks.

 Resource ceilings: A capacity planner seeks to meet the future demands on a system
by providing the additional capacity to fulfill those demands. Many people equate
capacity planning with system optimization (or performance tuning, if you like), but
they are not the same. System optimization aims to get more production from the
system components you have. Capacity planning measures the maximum amount of
work that can be done using the current technology and then adds resources to do
more work as needed. For capacity planning we do Load testing. During load testing
over a certain load for a particular server, the CPU, RAM, and Disk I/O utilization
rates rise but do not reach their resource ceiling. In this instance, the Network I/O
reaches its maximum 100-percent utilization at about 50 percent of the tested load,
and this factor is the current system resource ceiling.
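The ceiling analysis described above can be sketched as follows. The utilization figures are illustrative and mirror the example in the text, where Network I/O saturates at about 50 percent of the tested load while CPU, RAM, and disk still have headroom.

```python
# Toy resource-ceiling analysis: given each resource's utilization at
# increasing load steps, find which one saturates first.
utilization = {  # illustrative % busy at 25/50/75/100% of tested load
    "CPU":         [10, 22, 35, 48],
    "RAM":         [30, 45, 60, 72],
    "Disk I/O":    [15, 28, 40, 55],
    "Network I/O": [55, 100, 100, 100],
}

def resource_ceiling(measurements, saturated=100):
    first_hit = {}
    for resource, series in measurements.items():
        for load_step, pct in enumerate(series):
            if pct >= saturated:
                first_hit[resource] = load_step  # first load step at 100%
                break
    # The ceiling is the resource that saturates at the lowest load step.
    return min(first_hit, key=first_hit.get) if first_hit else None

print(resource_ceiling(utilization))  # Network I/O
```

Capacity planning then means adding capacity to that ceiling resource first, since adding CPU or RAM here would not let the system do any more work.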
 Network capacity: Network capacity refers to the maximum information transfer
limit of a network at any given point. Whether you build a new network or perform a
network refresh, maintaining sufficient capacity is essential to run uninterrupted
business operations. Proper network capacity planning allows you to calculate the
current resource usage, document resource capacity limits, and predict potential
changes in demand. It also requires you to regularly monitor key performance
metrics—such as network bandwidth and latency—to identify shortcomings or
issues capable of affecting your network's availability or throughput in the long run.
With proper capacity planning, you can also estimate the additional hardware,
software, and workforce needed to manage your network operations in the near
future.
Key Metrics: Network bandwidth, CPU and memory utilization, Network latency and
throughput, network jitter and packet loss
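A small capacity-planning sketch along these lines follows; all figures and the monthly growth rate are illustrative assumptions, not measurements.

```python
# Toy network capacity planning: current utilization of a link plus a
# demand-growth estimate predict when the link will saturate.
def months_until_full(capacity_mbps, current_mbps, monthly_growth=0.05):
    months = 0
    while current_mbps < capacity_mbps:
        current_mbps *= 1 + monthly_growth   # compound demand growth
        months += 1
        if months > 600:                     # give up: effectively never
            return None
    return months

utilization_pct = 100 * 400 / 1000           # 400 Mbps on a 1 Gbps link
print(f"link utilization: {utilization_pct:.0f}%")
print("months until saturation:", months_until_full(1000, 400))
```

Feeding in monitored bandwidth figures instead of these constants gives a rough planning horizon for when extra hardware or capacity must be provisioned.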
 SaaS: Software as a Service (SaaS) allows users to run existing online
applications. It is a model in which software is deployed as a hosted service and
accessed over the internet; that is, a software delivery model in which software
and its associated data are hosted centrally and accessed by clients, usually
through a web browser. SaaS services are used for the development and
deployment of modern applications.
It allows software and its functions to be accessed from anywhere with good
internet connection device and a browser. An application is hosted centrally and also
provides access to multiple users across various locations via the internet.
Example: Google Workspace, Salesforce
 PaaS: Platform As A Service (PAAS) is a cloud delivery model for applications
composed of services managed by a third party. It provides elastic scaling of your
application which allows developers to build applications and services over the
internet and the deployment models include public, private and hybrid.
Basically, it is a service where a third-party provider provides both software and
hardware tools to the cloud computing. The tools which are provided are used by
developers. PAAS is also known as Application PAAS. It helps us to organize and
maintain useful applications and services.
Example: Microsoft Azure App Service, AWS Elastic Beanstalk

5. Explain in detail underlying principles of Parallel and Distributed Computing.

Parallel computing:-
Parallel computing is the process of performing computational tasks across multiple
processors at once to improve computing speed and efficiency. It divides tasks into
sub-tasks and executes them simultaneously through different processors.
There are three main types, or “levels,” of parallel computing: bit, instruction, and
task.
o Bit-level parallelism: Uses larger “words,” which is a fixed-sized piece of data
handled as a unit by the instruction set or the hardware of the processor, to
reduce the number of instructions the processor needs to perform an
operation.
o Instruction-level parallelism: Employs a stream of instructions to allow
processors to execute more than one instruction per clock cycle (the
oscillation between high and low states within a digital circuit).
o Task-level parallelism: Runs computer code across multiple processors to run
multiple tasks at the same time on the same data.
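Task-level parallelism can be illustrated with a short sketch that runs several independent tasks concurrently over the same data. Threads are used here for simplicity; CPU-bound work in Python would typically use processes instead.

```python
from concurrent.futures import ThreadPoolExecutor

# Task-level parallelism: different, independent tasks (sum, max, mean)
# run concurrently, each over the same shared data.
data = list(range(1, 1_000_001))

tasks = {
    "total":   lambda xs: sum(xs),
    "maximum": lambda xs: max(xs),
    "mean":    lambda xs: sum(xs) / len(xs),
}

with ThreadPoolExecutor(max_workers=len(tasks)) as pool:
    futures = {name: pool.submit(fn, data) for name, fn in tasks.items()}
    results = {name: f.result() for name, f in futures.items()}

print(results)  # {'total': 500000500000, 'maximum': 1000000, 'mean': 500000.5}
```

Contrast this with bit- and instruction-level parallelism, which happen inside the processor: task-level parallelism is the one a programmer expresses explicitly in code.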

Distributed computing:-
Distributed computing is the process of connecting multiple computers via a local
network or wide area network so that they can act together as a single ultra-
powerful computer capable of performing computations that no single computer
within the network would be able to perform on its own.
Distributed computers offer two key advantages:
o Easy scalability: Just add more computers to expand the system.
o Redundancy: Since many different machines are providing the same service,
that service can keep running even if one (or more) of the computers goes
down.
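The redundancy advantage can be illustrated with a toy failover sketch; the `Replica` class and node names are hypothetical, standing in for real machines providing the same service.

```python
# Toy redundancy/failover: the same service runs on several machines, and
# the client simply tries the next replica when one is down, so the service
# as a whole stays available.
class Replica:
    def __init__(self, name, healthy=True):
        self.name = name
        self.healthy = healthy

    def handle(self, request):
        if not self.healthy:
            raise ConnectionError(f"{self.name} is down")
        return f"{self.name} served {request!r}"

def call_with_failover(replicas, request):
    for replica in replicas:
        try:
            return replica.handle(request)
        except ConnectionError:
            continue  # try the next machine in the cluster
    raise RuntimeError("all replicas down")

cluster = [Replica("node-1", healthy=False), Replica("node-2"), Replica("node-3")]
print(call_with_failover(cluster, "GET /status"))  # node-2 served 'GET /status'
```

Scalability in this model is equally simple: appending more `Replica` objects to the list is the distributed-computing equivalent of "just add more computers".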
