CC CAT-II Answer Key Word
(Answer Key)
Subject code: 20CS7LT2 Subject Name: Cloud Computing Total Marks: 100
PART – A [10 x 2 = 20 Marks ]
• Service Aggregation: Combines multiple services into a unified solution, ensuring they work together
seamlessly to meet specific requirements.
• Service Arbitrage: Allows selecting and switching between services dynamically based on pricing or
performance without modifying the system.
7. Outline the six layers of cloud services. (2 Mark)
• Physical Layer: Underlying hardware infrastructure like servers and storage.
• Virtualization Layer: Abstraction of physical resources into virtual instances.
• Infrastructure Layer: Provides core computing resources (IaaS).
• Platform Layer: Tools for application development (PaaS).
• Application Layer: End-user software applications (SaaS).
• Business Process Layer: Orchestration of workflows and business operations.
8. Classify the three basic cloud security enforcements. (2 Mark)
• Authentication: Verifying user identities.
• Authorization: Controlling access to resources based on user roles.
• Auditing: Tracking and monitoring activities to ensure compliance and detect anomalies.
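These three enforcements can be sketched in Python; the in-memory user store, role table, and permission names are hypothetical, and a production system would use salted password hashing and durable audit storage:

```python
import hashlib
import hmac

# Hypothetical in-memory stores, for illustration only.
USERS = {"alice": hashlib.sha256(b"s3cret").hexdigest()}
ROLES = {"alice": {"storage:read"}}
AUDIT_LOG = []

def authenticate(user, password):
    """Authentication: verify the user's identity via a password hash."""
    digest = hashlib.sha256(password.encode()).hexdigest()
    ok = user in USERS and hmac.compare_digest(USERS[user], digest)
    AUDIT_LOG.append(("auth", user, ok))  # Auditing: record every attempt
    return ok

def authorize(user, permission):
    """Authorization: check that the user's role grants the permission."""
    ok = permission in ROLES.get(user, set())
    AUDIT_LOG.append(("authz", user, permission, ok))  # Auditing again
    return ok
```

Every call is appended to the audit log, so compliance checks and anomaly detection can replay exactly who tried to do what.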
9. Interpret the different modules in Hadoop framework. (2 Mark)
• HDFS (Hadoop Distributed File System): Provides distributed storage.
• MapReduce Engine: Executes parallel data processing tasks.
• YARN (Yet Another Resource Negotiator): Manages cluster resources and job scheduling.
• Common Utilities: Shared libraries and utilities used across modules.
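The MapReduce module's processing model can be illustrated with a single-machine word-count sketch; Hadoop would run the map and reduce steps in parallel across HDFS blocks, whereas here everything runs locally:

```python
from collections import defaultdict
from itertools import chain

def map_phase(line):
    # Map: emit a (word, 1) pair for each word in an input split.
    return [(word, 1) for word in line.split()]

def reduce_phase(pairs):
    # Shuffle + Reduce: group pairs by key and sum the counts.
    counts = defaultdict(int)
    for word, n in pairs:
        counts[word] += n
    return dict(counts)

lines = ["big data big compute", "big data"]
result = reduce_phase(chain.from_iterable(map_phase(l) for l in lines))
```

In a real cluster, YARN schedules many such map tasks near the HDFS blocks holding their input and routes intermediate pairs to reducers.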
10. Infer about the Federated applications in cloud. (2 Mark)
Federated applications span multiple cloud providers or environments, allowing seamless integration
and interoperability. They enable organizations to use resources across clouds while maintaining centralized
management and ensuring compliance.
PART – B [5 x 16 = 80 Marks ]
11a. Build in detail about the evolution of cloud computing with a neat diagram.
Cloud computing allows users to access a wide range of services stored in the cloud or on the
Internet. Cloud computing services include computer resources, data storage, apps, servers, development tools,
and networking protocols. It is most commonly used by IT companies and for business purposes.
The phrase “Cloud Computing” was first introduced in the 1950s to describe internet-related services, and it
evolved from distributed computing to the modern technology known as cloud computing. Cloud services include
those provided by Amazon, Google, and Microsoft.
Distributed Systems
A distributed system is a composition of multiple independent systems that are presented to
users as a single entity. The purpose of distributed systems is to share resources and use them
effectively and efficiently. Distributed systems possess characteristics such as scalability, concurrency,
continuous availability, heterogeneity, and independence of failures. The main problem with this model
was that all the systems had to be present at the same geographical location. To solve this
problem, distributed computing evolved into three further types of computing: mainframe computing,
cluster computing, and grid computing.
Mainframe Computing
Mainframes, which first came into existence in 1951, are highly powerful and reliable computing
machines. They are responsible for handling large workloads such as massive input-output operations. Even today
they are used for bulk processing tasks such as online transactions. These systems have almost no
downtime and high fault tolerance. After distributed computing, they increased the processing capability
of systems, but they were very expensive. To reduce this cost, cluster computing emerged as an alternative
to mainframe technology.
Cluster Computing
In the 1980s, cluster computing emerged as an alternative to mainframe computing. Each machine in the
cluster was connected to the others by a high-bandwidth network. Clusters were far cheaper than
mainframe systems while being equally capable of high computation, and new nodes could easily be
added to the cluster when required. Thus the problem of cost was solved to some extent, but the
problem of geographical restrictions remained. To solve this, the concept of grid computing was
introduced.
Grid Computing
In the 1990s, the concept of grid computing was introduced: systems placed at entirely different
geographical locations were connected via the internet. These systems belonged to different
organizations, so the grid consisted of heterogeneous nodes. Although this solved some problems,
new ones emerged as the distance between nodes increased. The main problem
encountered was the low availability of high-bandwidth connectivity, along with other network-related
issues. Thus, cloud computing is often referred to as the “successor of grid computing”.
Virtualization
Virtualization was introduced nearly 40 years ago. It refers to the process of creating a virtual layer
over the hardware that allows a user to run multiple instances simultaneously on that hardware. It is a key
technology used in cloud computing and is the base on which major cloud computing services such as
Amazon EC2 and VMware vCloud work. Hardware virtualization is still one of the most common types
of virtualization.
Web 2.0
Web 2.0 is the interface through which the cloud computing services interact with the clients. It is
because of Web 2.0 that we have interactive and dynamic web pages. It also increases flexibility among web
pages. Popular examples of web 2.0 include Google Maps, Facebook, Twitter, etc. Needless to say, social
media is possible because of this technology only. It gained major popularity in 2004.
Service Orientation
A service orientation acts as a reference model for cloud computing. It supports low-cost, flexible,
and evolvable applications. Two important concepts were introduced in this computing model. These
were Quality of Service (QoS) which also includes the SLA (Service Level Agreement) and Software as a
Service (SaaS).
Utility Computing
Utility Computing is a computing model that defines service provisioning techniques for services
such as compute services along with other major services such as storage, infrastructure, etc which are
provisioned on a pay-per-use basis.
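The pay-per-use provisioning model can be illustrated with a small billing sketch; the rates below are invented for illustration and do not reflect any real provider's pricing:

```python
# Hypothetical per-unit rates (USD); real providers publish their own.
RATES = {"compute_hours": 0.05, "storage_gb_month": 0.02}

def monthly_bill(usage):
    """Charge only for what was actually consumed (pay-per-use)."""
    return round(sum(RATES[item] * qty for item, qty in usage.items()), 2)

# 100 compute hours and 50 GB-months of storage this billing period.
bill = monthly_bill({"compute_hours": 100, "storage_gb_month": 50})
```

Under this model, an idle month costs nothing, which is what distinguishes utility computing from buying fixed capacity up front.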
Cloud Computing
Cloud Computing means storing and accessing the data and programs on remote servers that are
hosted on the internet instead of the computer’s hard drive or local server. Cloud computing is also referred
to as Internet-based computing, it is a technology where the resource is provided as a service through the
Internet to the user. The data that is stored can be files, images, documents, or any other storable document.
Advantages of Cloud Computing
• Cost Saving
• Data Redundancy and Replication
• Ransomware/Malware Protection
• Flexibility
• Reliability
• High Accessibility
• Scalable
Disadvantages of Cloud Computing
• Internet Dependency
• Issues in Security and Privacy
• Data Breaches
• Limitations on Control
Or
11.b. Construct the principles of parallel and distributed computing with a neat diagram in detail.
Parallel computing and distributed computing are two important models of computing that play key
roles in today’s high-performance computing. Both are designed to perform a large number of
calculations by breaking processes down into several parallel tasks; however, they differ in structure, function,
and utilization. The following section compares parallel computing and distributed computing, along with
their advantages, disadvantages, and applications.
Parallel Computing
In parallel computing, multiple processors perform multiple assigned tasks simultaneously.
Memory in parallel systems can be either shared or distributed. Parallel computing provides concurrency and
saves time and money.
Examples
Blockchains, smartphones, laptop computers, the Internet of Things, artificial intelligence and machine
learning, space shuttles, and supercomputers are technologies that use parallel computing.
Advantages of Parallel Computing
• Increased Speed: Several calculations are executed concurrently, reducing the computation
time required to complete large-scale problems.
• Efficient Use of Resources: Takes full advantage of all available processing units,
making the best use of the machine’s computational power.
• Scalability: The more processors built into the system, the more complex the problems that can be solved
within a short time.
• Improved Performance for Complex Tasks: Best suited for activities involving heavy numerical
calculation, such as simulation, scientific analysis and modeling, and data processing.
Disadvantages of Parallel Computing
• Complexity in Programming: Writing programs that organize tasks in a parallel
manner is more difficult than serial programming.
• Synchronization Issues: Processors operating concurrently must be synchronized, and the
required communication can become a bottleneck.
• Hardware Costs: Parallel computing may require components
such as multi-core processors, which can be costlier than normal systems.
Distributed Computing
In distributed computing we have multiple autonomous computers that appear to the user as a single
system. In distributed systems there is no shared memory; computers communicate with each other
through message passing. In distributed computing, a single task is divided among different computers.
Examples
Artificial Intelligence and Machine Learning, Scientific Research and High-Performance Computing,
Financial Sectors, Energy and Environment sectors, Internet of Things, Blockchain and Cryptocurrencies are
the areas where distributed computing is used.
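The no-shared-memory, message-passing model can be mimicked on one machine with threads and queues; this is only an analogy (real distributed nodes would exchange messages over a network):

```python
from queue import Queue
from threading import Thread

def worker_node(inbox, outbox):
    # A "node" with no shared state: it only sees messages in its inbox.
    total = 0
    while True:
        msg = inbox.get()
        if msg is None:      # conventional shutdown message
            break
        total += msg
    outbox.put(total)        # report the partial result back

inbox, outbox = Queue(), Queue()
node = Thread(target=worker_node, args=(inbox, outbox))
node.start()
for part in [1, 2, 3, 4]:    # a single task divided into parts
    inbox.put(part)
inbox.put(None)
node.join()
partial_sum = outbox.get()
```

Everything the node learns arrives as a message, mirroring how distributed computers cooperate without shared memory.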
Advantages of Distributed Computing
• Fault Tolerance: The failure of one node removes it from the computation, but this is not
fatal for the entire computation, since other computers continue participating in the
process; this makes the system more reliable.
• Cost-Effective: Builds on existing hardware and can flexibly use commodity machines
instead of expensive, specialized processors.
• Scalability: Distributed systems can scale horizontally by adding more machines to
the network, so they can take on greater workloads and
processes.
• Geographic Distribution: Distributed computing makes it possible to execute tasks at different locations,
reducing latency.
Disadvantages of Distributed Computing
• Complexity in Management: Managing a distributed system is harder, since it may require
dealing with network latency and failures as well as keeping
distributed information synchronized.
• Communication Overhead: Inter-node communication, especially between nodes that are
geographically distant, can slow message transfer and significantly reduce overall
performance.
12.a Construct about Service Oriented Architecture with a neat sketch in detail.
Service-Oriented Architecture (SOA) is a stage in the evolution of application
development and/or integration. It defines a way to make software components reusable
using the interfaces.
SOA allows users to combine a large number of facilities from existing services to
form applications.
SOA encompasses a set of design principles that structure system development
and provide means for integrating components into a coherent and decentralized system.
SOA-based computing packages functionalities into a set of interoperable services, which
can be integrated into different software systems belonging to separate business
domains.
Service provider: The service provider is the maintainer of the service and the
organization that makes available one or more services for others to use. To advertise
services, the provider can publish them in a registry, together with a service contract
that specifies the nature of the service, how to use it, the requirements for the service,
and the fees charged.
Service consumer: The service consumer can locate the service metadata in the
registry and develop the required client components to bind and use the service.
Services might aggregate information and data retrieved from other services or create
workflows of services to satisfy the request of a given service consumer. This practice is
known as service orchestration. Another important interaction pattern is service
choreography, which is the coordinated interaction of services without a single point of
control.
Advantages of SOA:
Service reusability: In SOA, applications are made from existing services. Thus,
services can be reused to make many applications.
Easy maintenance: As services are independent of each other they can be updated and
modified easily without affecting other services.
Platform independent: SOA allows making a complex application by combining
services picked from different sources, independent of the platform.
Reliability: SOA applications are more reliable because it is easier to debug small services
than huge code bases.
Scalability: Services can run on different servers within an environment, this
increases scalability.
Disadvantages of SOA:
High overhead: Input parameters of services are validated whenever
services interact, which decreases performance by increasing load and response time.
High investment: A huge initial investment is required for SOA.
Or
The figure below shows another example of how the ICICI bank ATM used the service
provided by SBI bank to access the database of the customer which cannot be accessed
by the ICICI bank ATM directly.
How do web services work?
Web services use the request-response method to communicate among
applications. For any communication, we need a medium and a common format that can
be understood by everyone, in the case of web services medium is the internet and the
common format is the XML (Extensible Markup Language) format as every programming
language can understand the XML markup language.
A client is the one that requests a service from the server, which is known as the
service provider. The request is sent as a message in the common XML format,
and in response the service provider replies with a message in the same
common format (i.e., XML).
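The request-response exchange can be sketched with Python's standard XML tooling; the element names (`request`, `service`, `balance`) are hypothetical, since a real service defines its own schema (e.g. via WSDL):

```python
import xml.etree.ElementTree as ET

# Client side: build a request message in the common XML format.
request = ET.Element("request")
ET.SubElement(request, "service").text = "getBalance"
ET.SubElement(request, "account").text = "12345"
request_xml = ET.tostring(request, encoding="unicode")

# Provider side: parse the XML request and answer in the same format.
parsed = ET.fromstring(request_xml)
response = ET.Element("response")
ET.SubElement(response, "balance").text = "250.00"
response_xml = ET.tostring(response, encoding="unicode")
```

Because both sides speak XML, the client and provider can be written in entirely different languages.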
Web Service Components
SOAP: As mentioned above, SOAP stands for Simple Object Access Protocol. It is
the protocol stating how the communication between applications is going to happen.
WSDL: It stands for Web Services Description Language, which is an XML
document containing the rules for communication among different software. It defines:
• How the service can be accessed by the requesting system
• The name of the service
• The specific parameters needed for accessing the service, and its return type
• The error messages that will be displayed in case of any issue while
accessing the data
UDDI: Universal Description, Discovery, and Integration is the full form of UDDI. It
is a directory that tells us which software needs to be contacted for
a particular type of data.
Types of web services
There are mainly two types of web services:
SOAP web services: SOAP stands for Simple Object Access Protocol. These
protocols are based on XML, which is a lightweight data exchange language. They
are independent of language and can run on any platform.
SOAP supports both stateful and stateless operations. Stateful means that the server
keeps track of the information received from the client on each request, while stateless
means that each request contains enough information about the state of the client
that the server does not need to save it, thus increasing
the speed of communication.
Many companies, such as IBM and Microsoft, have produced implementations of SOAP in
their systems.
RESTful web services: REST stands for Representational State Transfer. They are
also language- and platform-independent and are faster than SOAP.
Nowadays RESTful web services are used more than SOAP. They treat data as
resources and return it in JSON or XML format. These web
services send the state of the requested object in response to the client’s
request, which is why the style is known as Representational State Transfer.
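A minimal sketch of the RESTful idea, returning a JSON representation of a resource; the resource store and status-code handling here are illustrative only:

```python
import json

# Hypothetical resource store; a real RESTful service maps HTTP verbs
# (GET/POST/PUT/DELETE) onto such resources via URLs.
BOOKS = {1: {"id": 1, "title": "Cloud Computing"}}

def handle_get(resource_id):
    """Return the resource's state as a JSON representation."""
    book = BOOKS.get(resource_id)
    if book is None:
        return 404, json.dumps({"error": "not found"})
    return 200, json.dumps(book)

status, body = handle_get(1)
```

Each response is a self-contained representation of the resource's state, so the server keeps no per-client session (the stateless property noted above).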
13.a. Build the Deployment Models of a cloud computing environment with illustrations
Cloud Deployment Model functions as a virtual computing environment with a
deployment architecture that varies depending on the amount of data you want to store
and who has access to the infrastructure.
Types of Cloud Computing Deployment Models
The cloud deployment model identifies the specific type of cloud environment
based on ownership, scale, and access, as well as the cloud’s nature and purpose. The
location of the servers you’re utilizing and who controls them are defined by a cloud
deployment model.
Different types of cloud computing deployment models are described below.
• Public Cloud
• Private Cloud
• Hybrid Cloud
• Community Cloud
• Multi-Cloud
Public Cloud
The public cloud makes it possible for anybody to access systems and services.
The public cloud may be less secure as it is open to everyone. In the public cloud,
cloud infrastructure services are provided over the internet to the general public
or major industry groups. For example, Google App Engine.
Public Cloud
Advantages of the Public Cloud Model
• Minimal Investment: Because it is a pay-per-use service, there is no substantial
upfront fee, making it excellent for enterprises that require immediate access to
resources.
• No setup cost: The entire infrastructure is fully subsidized by the cloud service
providers, thus there is no need to set up any hardware.
• Infrastructure Management is not required: Using the public cloud does not
necessitate infrastructure management.
• No maintenance: The maintenance work is done by the service provider (not
users).
• Dynamic Scalability: To fulfill your company’s needs, on-demand resources are
accessible.
Disadvantages of the Public Cloud Model
• Less secure: The public cloud is less secure because resources are shared publicly, so there is no
guarantee of high-level security.
• Low customization: Because it is accessed by many users, it cannot be customized
to individual requirements.
Private Cloud
The private cloud deployment model is the exact opposite of the public cloud
deployment model. It’s a one-on-one environment for a single user (customer). There is
no need to share your hardware with anyone else. The distinction between private and
public clouds is in how you handle all of the hardware. It is also called the “internal
cloud” & it refers to the ability to access systems and services within a given border or
organization.
Private Cloud
Hybrid Cloud
The hybrid cloud combines public and private clouds, allowing an organization to keep
sensitive workloads in the private cloud while using the public cloud for scalable,
less-critical workloads.
Advantages of the Hybrid Cloud Model
• Flexibility and control: Businesses with more flexibility can design personalized
solutions that meet their particular needs.
• Cost: Because public clouds provide scalability, you’ll only be responsible for
paying for the extra capacity if you require it.
• Security: Because data is properly separated, the chances of data theft by
attackers are considerably reduced.
Disadvantages of the Hybrid Cloud Model
• Difficult to manage: Hybrid clouds are difficult to manage as it is a combination
of both public and private cloud. So, it is complex.
• Slow data transmission: Data transmission in the hybrid cloud takes place
through the public cloud so latency occurs.
Community Cloud
It allows systems and services to be accessible by a group of organizations. It is a
distributed system that is created by integrating the services of different clouds to
address the specific needs of a community, industry, or business.
Community Cloud
Advantages of the Community Cloud Model
• Cost Effective
• Security
• Shared resources
• Collaboration and data sharing
Disadvantages of the Community Cloud Model
• Limited scalability
• Rigid customization
Multi-Cloud
Under this paradigm, as the name implies, multiple cloud providers are employed at the
same time. It is similar to the hybrid cloud deployment approach,
which combines public and private cloud resources.
Advantages of the Multi-Cloud Model
• Reduced latency
• High availability of service
Disadvantages of the Multi-Cloud Model
• Complex: The combination of many clouds makes the system complex and
bottlenecks may occur.
• Security issues: Due to the complex structure, there may be loopholes that a
hacker can take advantage of, making data insecure.
Or
The PaaS cloud computing platform is created for programmers to develop, test,
run, and manage applications.
Characteristics of PaaS
There are the following characteristics of PaaS -
o Accessible to various users via the same development application.
o Integrates with web services and databases.
o Builds on virtualization technology, so resources can easily be scaled up or
down as per the organization's need.
o Supports multiple languages and frameworks.
o Provides an ability to "Auto-scale".
Example: AWS Elastic Beanstalk, Windows Azure, Heroku, Force.com, Google App
Engine, Apache Stratos, Magento Commerce Cloud, and OpenShift.
Characteristics of SaaS
There are the following characteristics of SaaS -
o Managed from a central location
o Hosted on a remote server
o Accessible over the internet
o Users are not responsible for hardware and software updates. Updates are applied
automatically.
o The services are purchased on the pay-as-per-use basis
Example: BigCommerce, Google Apps, Salesforce, Dropbox, ZenDesk, Cisco WebEx,
Slack, and GoToMeeting.
Difference between IaaS, PaaS, and SaaS
IaaS is used by network architects, PaaS is used by developers, and SaaS is used by
end users.
4. Multi-Cloud Libraries: Clients use a uniform cloud API as a library to create their
own brokers. Inter-clouds that employ libraries make it easier to use clouds consistently.
The Java library Apache jclouds, the Python library Apache Libcloud, and the Ruby library
Apache Deltacloud are a few examples of multi-cloud libraries.
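The broker pattern these libraries enable can be sketched in plain Python; the provider classes and method names below are hypothetical stand-ins for the uniform driver interface a real library such as Apache Libcloud exposes:

```python
# Illustrative multi-cloud broker: one API, many providers.
class CloudDriver:
    def create_server(self, name):
        raise NotImplementedError

class ProviderA(CloudDriver):
    def create_server(self, name):
        return f"A:{name}"   # stand-in for a real provider API call

class ProviderB(CloudDriver):
    def create_server(self, name):
        return f"B:{name}"

DRIVERS = {"a": ProviderA, "b": ProviderB}

def get_driver(provider):
    """Return a driver with a uniform API, whatever the underlying cloud."""
    return DRIVERS[provider]()

server = get_driver("a").create_server("web-1")
```

Client code written against `CloudDriver` never changes when the broker switches providers, which is exactly the consistency benefit the text describes.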
Difficulties with Inter-Cloud Research
The needs of cloud users frequently call for various resources, and the needs are
often variable and unpredictable. This element creates challenging issues with resource
provisioning and application service delivery. The difficulties in federating cloud
infrastructures include the following:
• Prediction of Application Service Behaviour: It is essential that the system be
able to predict customer demands and service behavior. It cannot make rational decisions to
dynamically scale up and down until it has the ability to predict. It is necessary to
construct prediction and forecasting models.
• Flexible Service-Resource Mapping: Due to high operational expenses and
energy demands, it is crucial to enhance efficiency, cost-effectiveness, and utilization. The
system must compute the appropriate combination of software and hardware, which makes
matching services to cloud resources a difficult process.
• Techniques for Optimization Driven by Economic Models: An approach to
decision-making that is driven by the market and looks for the best possible
combinations of services and deployment strategies is known as combinatorial
optimization.
• Integration and Interoperability: SMEs may not be able to migrate to the cloud
since they have a substantial number of on-site IT assets, such as business applications.
Due to security and privacy concerns, sensitive data in an organization may not be
moved to the cloud.
• Monitoring System Components at Scale: In spite of the distributed nature of
the system’s components, centralized procedures are used for system management and
monitoring.
Or
14.a. Build the purpose of IAM and its functional architecture with a neat diagram in
detail.
Identity Access Management is administered by the root user (administrator) of the organization.
A user represents one person within the organization, and users can be grouped so
that all users in a group have the same privileges to the services.
Shared Responsibility Model for Identity Access Management
Cloud Service Provider (CSP)
• Infrastructure (Global Security of the Network)
• Configuration and Vulnerability Analysis
• Compliance Validation
Customer
• Users, Groups, Roles, Policies Management and Monitoring
• Use IAM tools to apply appropriate permissions.
• Analyze access patterns and review permissions.
Identity Access Management (IAM) ensures secure access to cloud resources.
The Architecture of Identity Access Management
User Management:- It consists of activities for the control and management of
identity life cycles.
Authentication Management:- It consists of activities for effectively controlling
and managing the processes that determine which user is trying to access the services
and whether those services are relevant to that user.
Authorization Management:- It consists of activities for effectively controlling
and managing the processes that determine which services a user is allowed to access
according to the policies made by the administrator of the organization.
Access Management:- It is used in response to a request made by a user
wanting to access resources within the organization.
Data Management and Provisioning:- The authorization data and identities are
propagated to IT resources through automated or manual processes.
Monitoring and Auditing:- Based on the defined policies, monitoring,
auditing, and reporting are carried out on users’ access to resources within
the organization.
Operational Activities of IAM:- In provisioning, we onboard new users onto the
organization’s systems and applications and give them the necessary access to
services and data. Deprovisioning works in exactly the opposite way: we delete or
deactivate the identity of the user and revoke all of the user’s privileges.
Credential and Attribute Management:- Credentials are bound to an individual
user and are verified during the authentication process. These processes generally
include allotment of username, static or dynamic password, handling the password
expiration, encryption management, and access policies of the user.
Entitlement Management:- These are also known as authorization policies in
which we address the provisioning and de-provisioning of the privileges provided to the
user for accessing the databases, applications, and systems. We provide only the
required privileges to the users according to their roles. It can also be used for security
purposes.
Identity Federation Management:- In this process, we manage relationships
beyond the internal networks of the organization, that is, among different
organizations. A federation is an association of organizations that come together
to exchange information about their users and resources in order to enable collaboration
and transactions.
Centralization of Authentication and Authorization:- Authentication and authorization
should be centralized so that each application does not need to build its own custom
authentication and authorization features; centralization also promotes a loosely coupled
architecture.
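The entitlement idea above (users receive only the privileges attached to their roles) can be sketched as follows; the role names, users, and permission strings are hypothetical:

```python
# Minimal role-based entitlement sketch: users -> roles -> permissions.
ROLE_POLICIES = {
    "developer": {"app:deploy", "logs:read"},
    "auditor": {"logs:read"},
}
USER_ROLES = {"bob": ["developer"], "eve": ["auditor"]}

def is_allowed(user, action):
    """Grant an action only if one of the user's roles permits it."""
    return any(action in ROLE_POLICIES[r] for r in USER_ROLES.get(user, []))
```

Deprovisioning a user is then a single deletion from `USER_ROLES`, which revokes every privilege at once.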
15.a Build the basic of Google App Engine infrastructure programming model.
A scalable runtime environment, Google App Engine is mostly used to run web
applications. App Engine makes it easier to develop scalable, high-performance web
apps. Google’s applications scale up and down in response to shifting demand.
Cron tasks, communications, scalable data stores, work queues, and in-memory
caching are some of these services.
The development and hosting platform Google App Engine, which powers anything
from web programming for huge enterprises to mobile apps, uses the same
infrastructure as Google’s large-scale internet services. It is a fully managed PaaS
(platform as a service) cloud computing platform that uses in-built services to run your
apps.
After creating a Cloud account, you may Start Building your App
• Using the Go template/HTML package
• Python-based webapp2 with Jinja2
• PHP and Cloud SQL
• Using Java’s Maven
The app engine runs the programs on various servers while “sandboxing”
them. The app engine allows a program to use more resources in order to handle
increased demand. The app engine powers programs like Snapchat, Rovio, and
Khan Academy.
Features of App Engine
Runtimes and Languages
To create an application for an app engine, you can use Go, Java, PHP, or Python.
You can develop and test an app locally using the SDK’s deployment toolkit. Each
language’s SDK and runtime are unique. Your program is run in a:
• Java Run Time Environment version 7
• Python Run Time environment version 2.7
• PHP runtime’s PHP 5.4 environment
• Go runtime 1.2 environment
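As a sketch, a minimal deployment descriptor for the Python 2.7 runtime listed above might look like this (the module path `main.app` is an assumed WSGI entry point, not prescribed by the text):

```yaml
# Minimal app.yaml sketch for the App Engine Python 2.7 runtime.
runtime: python27
api_version: 1
threadsafe: true

handlers:
- url: /.*
  script: main.app   # main.py must expose a WSGI application object
```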
Generally Usable Features
These are protected by the service-level agreement and deprecation policy of the
app engine. They include communications, process management, computing, data
storage, retrieval, and search, as well as app configuration and management. Features
such as the HRD migration tool, Google Cloud SQL, logs, datastore, dedicated Memcache,
blobstore, Memcache, and search fall under data storage, retrieval, and search.
Features in Preview
In a later iteration of the app engine, these functions will undoubtedly be made
broadly accessible. However, because they are in the preview, their implementation may
change in ways that are backward-incompatible. Sockets, MapReduce, and the Google
Cloud Storage Client Library are a few of them.
Experimental Features
The experimental features include Prospective Search, Page Speed, OpenID,
Datastore Admin/Backup/Restore, Task Queue Tagging, MapReduce, the Task Queue
REST API, OAuth, and app metrics analytics.
Third-Party Services
As Google provides documentation and helper libraries to expand the capabilities
of the app engine platform, your app can perform tasks that are not built into the core
product you are familiar with as app engine.
Advantages of Google App Engine
The Google App Engine has a lot of benefits that can help you advance your app
ideas. This comprises:
1. Secure Infrastructure: The Internet infrastructure that Google uses is
arguably among the safest in the world. Since the application data and code are hosted
on highly secure servers, unauthorized access has been rare to date.
2. Faster Time to Market: For every organization, getting a product or service to
market quickly is crucial. Straightforward development and maintenance of an app are
essential to releasing it quickly, and Google Cloud App Engine helps a firm grow
swiftly.
3. Quick to Start: You don’t need to spend a lot of time prototyping or deploying the
app to users because there is no hardware or product to buy and maintain.
4. Easy to Use: The tools that you need to create, test, launch, and update the
applications are included in Google App Engine (GAE).
5. Rich set of APIs & Services: A number of built-in APIs and services in Google
App Engine enable developers to create strong, feature-rich apps.
6. Scalability: This is one of the deciding factors in the success of any software.
When building apps on Google App Engine, you can access technologies such as GFS,
Bigtable, and others that Google uses to build its own applications.
7. Performance and Reliability: Google ranks among the top international brands,
so you can expect strong performance and reliability from its platform.
8. Cost Savings: To administer your servers, you don’t need to employ engineers or
even do it yourself. The money you save might be put toward developing other areas of
your company.
Or
15b. Apply the OpenStack platform for cloud computing in detail with a neat
diagram.
OpenStack Architecture
OpenStack is a free, open-standard platform for cloud computing. Mostly, it
is deployed as IaaS (Infrastructure-as-a-Service) in both private and public clouds, where
virtual servers and other types of resources are made available to users. The platform
consists of interrelated components that control diverse networking resources, storage
resources, and multi-vendor hardware processing tools throughout the data center.
Users manage it through command-line tools, RESTful web services, and a web-based
dashboard.
OpenStack contains a modular architecture along with several code names for the
components.
Nova (Compute)
Nova is the OpenStack project that provides a way of provisioning compute instances.
Nova supports creating virtual machines and bare-metal servers, and it has limited
support for system containers. It runs as a set of daemons on top of existing Linux
servers to provide that service.
This component is written in Python. It uses several external Python libraries,
such as the SQL toolkit and object-relational mapper SQLAlchemy, the AMQP messaging
framework Kombu, and the concurrent networking library Eventlet. Nova is designed to
scale horizontally.
Managing end-to-end performance requires tracking metrics from Swift, Cinder,
Neutron, Keystone, Nova, and various other services, as well as analyzing RabbitMQ,
which OpenStack services use for message passing. Each of these services produces its
own log files, which must be analyzed, especially in organization-level infrastructure.
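As a sketch of how Nova provisions a compute instance in practice, the unified OpenStack CLI can be used as below; the image, flavor, network, and server names are placeholders, and a configured cloud (credentials loaded into the environment) is assumed:

```shell
# Boot a VM through Nova (all resource names here are hypothetical).
openstack server create demo-vm \
  --image ubuntu-22.04 \
  --flavor m1.small \
  --network demo-net

# Check its build status and assigned addresses.
openstack server list
```

Behind these commands, Nova schedules the instance onto a compute host and coordinates with Neutron for its network ports.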
Neutron (Networking)
Neutron is the OpenStack project that provides "network connectivity as a
service" between interface devices (such as vNICs) managed by other OpenStack
services (such as Nova). It implements the OpenStack Networking API.
It handles every networking facet of the VNI (Virtual Networking Infrastructure) and
the access-layer aspects of the PNI (Physical Networking Infrastructure) in an
OpenStack platform. OpenStack Networking allows projects to build advanced virtual
network topologies, which can include services such as a VPN (Virtual Private
Network) and a firewall.
Neutron permits dedicated static or DHCP-assigned IP addresses, and it supports
floating IP addresses so that traffic can be rerouted.
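The Neutron capabilities above (project networks, routers, floating IPs) can be sketched with the OpenStack CLI; every resource name and the CIDR below are placeholders, and an external network called `public` is assumed to already exist:

```shell
# Create a project network and subnet (names and CIDR are hypothetical).
openstack network create demo-net
openstack subnet create demo-subnet \
  --network demo-net --subnet-range 192.168.10.0/24

# Attach the subnet to a router with an external gateway.
openstack router create demo-router
openstack router set demo-router --external-gateway public
openstack router add subnet demo-router demo-subnet

# A floating IP lets external traffic be rerouted to an instance.
openstack floating ip create public
```

Associating the floating IP with a server later on is what allows its traffic to be redirected without changing the instance’s fixed address.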
Horizon (Dashboard)
Horizon is the canonical implementation of the OpenStack Dashboard, which offers a
web-based UI to OpenStack services such as Keystone, Swift, and Nova. The
Dashboard ships with a few central dashboards, such as a "Settings Dashboard", a
"System Dashboard", and a "User Dashboard", which together cover core support. The
Horizon application ships with an API abstraction layer for many core OpenStack
projects, providing a stable and consistent collection of reusable methods for
developers. With these abstractions, developers working on OpenStack Horizon do not
need to be intimately familiar with the APIs of every OpenStack project.