Notes Unit IV & V

A theoretical model for cloud computing services is referred to as the “inter-cloud” or “cloud of clouds”: numerous separate clouds combined into a single fluid mass for on-demand operations. Simply put, the inter-cloud ensures that a cloud can utilize resources beyond its own reach through existing agreements with other cloud service providers, since there are limits to the physical resources and geographic reach of any one cloud.
Need of Inter-Cloud
Due to their Physical Resource limits, Clouds have certain Drawbacks:
 When a cloud’s computational and storage capacity is completely depleted, it is
unable to serve its customers.
 The Inter-Cloud addresses these circumstances when one cloud would access the
computing, storage, or any other resource of the infrastructures of other clouds.
Benefits of the Inter-Cloud Environment include:
 Avoiding vendor lock-in for the cloud client.
 Access to a variety of geographic locations, as well as enhanced application resiliency.
 Better service-level agreements (SLAs) for the cloud client.
 Expand-on-demand capability for the cloud provider.
Inter-Cloud Resource Management
A cloud infrastructure's processing and storage capacity can be exhausted. The inter-cloud addresses this by combining numerous separate clouds into a single fluid mass for on-demand operations: a cloud can utilize resources beyond its own range and still meet the service-allocation requests received from its clients.
Managing resources across multiple clouds requires careful orchestration and automation.
Types of Inter-Cloud Resource Management
1. Federation Clouds: A federation cloud is a kind of inter-cloud where several
cloud service providers willingly link their cloud infrastructures together to
exchange resources. Cloud service providers in the federation trade resources in
an open manner. With the aid of this inter-cloud technology, private cloud
portfolios, as well as government clouds (those utilized and owned by non-profits
or the government), can cooperate.
2. Multi-Cloud: A client or service makes use of numerous independent clouds in a multi-cloud. A multi-cloud ecosystem lacks voluntarily shared infrastructure across cloud service providers; it is the client's or their agents' responsibility to manage resource provisioning and scheduling. This strategy is used to draw assets from both public and private cloud portfolios. Multi-cloud access comes in two kinds: services and libraries.
Topologies used In InterCloud Architecture
1. Peer-to-Peer Inter-Cloud Federation: Clouds work together directly, though they may also use distributed entities such as directories or brokers. Clouds communicate and negotiate directly without intermediaries. RESERVOIR (Resources and Services Virtualization without Barriers) is an example of a peer-to-peer inter-cloud federation project.
2. Centralized Inter-Cloud Federation: In the cloud, resource sharing is carried
out or facilitated by a central body. The central entity serves as a registry for the
available cloud resources. The inter-cloud initiatives Dynamic Cloud Collaboration
(DCC), and Federated Cloud Management leverage centralized inter-cloud
federation.

3. Multi-Cloud Service: Clients access various clouds through a service. The cloud client hosts the service either internally or externally, and the service includes broker components. The inter-cloud initiatives OPTIMIS, Contrail, mOSAIC, and STRATOS, as well as commercial cloud-management solutions, leverage multi-cloud services.
4. Multi-Cloud Libraries: Clients use a uniform cloud API as a library to create their own brokers. Inter-clouds that employ libraries make it easier to use clouds consistently. The Java library jclouds, the Python library Apache Libcloud, and the Ruby library Apache Deltacloud are a few examples of multi-cloud libraries.
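To make the uniform-API idea concrete, here is a minimal sketch in Python using Apache Libcloud, the multi-cloud library named above. The credentials are placeholders, and exact constructor arguments vary per driver (some drivers expect extra keyword arguments), so treat this as an illustration rather than a drop-in script.

from libcloud.compute.types import Provider
from libcloud.compute.providers import get_driver

def list_nodes_everywhere(accounts):
    # Query several independent clouds through one uniform API.
    nodes = []
    for provider, key, secret in accounts:
        driver = get_driver(provider)(key, secret)  # same call shape per cloud
        nodes.extend(driver.list_nodes())
    return nodes

accounts = [
    (Provider.EC2, "aws-access-key", "aws-secret-key"),      # placeholders
    (Provider.DIGITAL_OCEAN, "do-api-token", None),          # placeholders
]
for node in list_nodes_everywhere(accounts):
    print(node.name, node.state)

The point is that the client code, not the providers, does the brokering: one loop, one API, many clouds.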

Difficulties with Inter-Cloud Research


The needs of cloud users frequently call for various resources, and the needs are
often variable and unpredictable. This element creates challenging issues with
resource provisioning and application service delivery. The difficulties in federating
cloud infrastructures include the following:
 Prediction of Application Service Behavior: It is essential that the system be able to predict customer demands and service behavior. It cannot make rational decisions to dynamically scale up and down until it can predict. Prediction and forecasting models must therefore be constructed. Building models that accurately learn and fit statistical functions suited to various behaviors is a difficult task, and correlating a service's various behaviors can be more difficult still.
 Flexible Service-Resource Mapping: Due to high operational expenses and
energy demands, it is crucial to enhance efficiency, cost-effectiveness, and usage.
A difficult process of matching services to cloud resources results from the
system’s need to calculate the appropriate software and hardware combinations.
The QoS targets must be met simultaneously with the highest possible system
utilization and efficiency throughout the mapping of services.
 Techniques for Optimization Driven by Economic Models: An approach to
decision-making that is driven by the market and looks for the best possible
combinations of services and deployment strategies is known as combinatorial
optimization. It is necessary to create optimization models that address both
resource- and user-centered QoS objectives.
 Integration and Interoperability: SMEs may not be able to migrate to the cloud
since they have a substantial number of on-site IT assets, such as business
applications. Due to security and privacy concerns, sensitive data in an
organization may not be moved to the cloud. In order for on-site assets and cloud
services to work together, integration and interoperability are required. It is
necessary to find solutions for the problems of identity management, data
management, and business process orchestration.
 Monitoring System Components at Scale: In spite of the distributed nature of
the system’s components, centralized procedures are used for system
management and monitoring. The management of multiple service queues and a
high volume of service requests raises issues with scalability, performance, and
reliability, making centralized approaches ineffective. Instead, decentralized
messaging and indexing models-based architectures are required, which can be
used for service monitoring and management services.
Resource Provisioning and Resource Provisioning Methods

Resource provisioning in the context of Computer Science refers to the technique of allocating
virtualized resources to users based on their demands and needs. It involves creating and
assigning virtual machines to users in order to meet quality of service parameters and match
upcoming workloads.
The allocation of resources and services from a cloud provider to a customer is known as
resource provisioning in cloud computing, sometimes called cloud provisioning. Resource
provisioning is the process of choosing, deploying, and managing software (like load balancers
and database server management systems) and hardware resources (including CPU, storage, and
networks) to assure application performance.
To effectively utilize resources without violating the SLA and to achieve the QoS requirements, static/dynamic provisioning and static/dynamic allocation of resources must be established based on application needs. Over- and under-provisioning of resources must be prevented. Power usage is another significant constraint: care should be taken to reduce power consumption and dissipation, for example through careful VM placement, and there should be techniques to avoid excess power consumption.
Therefore, the ultimate objective of a cloud user is to rent resources at the lowest possible cost,
while the objective of a cloud service provider is to maximize profit by effectively distributing
resources.

Importance of Cloud Provisioning:

 Scalability: Being able to actively scale up and down with fluctuations in demand for resources is one of the major selling points of cloud computing.
 Speed: Users can quickly spin up multiple machines as per their usage without the need for an IT administrator.
 Savings: The pay-as-you-go model allows for enormous cost savings for users; it is facilitated by provisioning or removing resources according to demand.
Efficient resource allocation is critical for optimizing cloud infrastructure.

Challenges of Cloud Provisioning:

 Complex management: Cloud providers have to use a variety of tools and techniques to actively monitor the usage of resources.
 Policy enforcement: Organisations have to ensure that users cannot access resources they shouldn't.
 Cost: With automated provisioning, costs can climb very quickly if proper checks are not put in place. Alerts on reaching cost thresholds are required.

Tools for Cloud Provisioning:

 Google Cloud Deployment Manager


 IBM Cloud Orchestrator
 AWS CloudFormation
 Microsoft Azure Resource Manager

Types of Cloud Provisioning:


 Static Provisioning or Advance Provisioning: Static provisioning can be used successfully for applications with known and typically constant demands or workloads. In this instance, the cloud provider supplies the customer with a set quantity of resources. The client can thereafter utilize these resources as required and is in charge of making sure they are not overutilized. This is an excellent choice for applications with stable and predictable needs or workloads; for instance, a customer might want a database server with a set quantity of CPU, RAM, and storage.
When a consumer contracts with a service provider for services, the provider makes the necessary preparations before the service can begin. Either a one-time cost or a monthly fee is applied to the client.
Resources are pre-allocated to customers by cloud service providers. This means that before consuming resources, a cloud user must decide statically how much capacity they need. Static provisioning may therefore result in over- or under-provisioning.
 Dynamic Provisioning or On-demand Provisioning: With dynamic provisioning, the provider adds resources as needed and removes them when they are no longer required. It follows a pay-per-use model, i.e. clients are billed only for the exact resources they use. Consumers pay for each use of the resources that the cloud service provider allots to them, as and when needed; this is also called the pay-as-you-go model. Dynamic provisioning techniques also allow VMs to be moved on the fly to new compute nodes within the cloud when application demand changes or varies. This is a suitable choice for programs with erratic and shifting demands or workloads; for instance, a customer might want a web server with a configurable quantity of CPU, memory, and storage, utilizing the resources as required and paying only for what is really used. The client is in charge of ensuring that the resources are not oversubscribed; otherwise, fees can skyrocket. (A minimal autoscaling sketch follows this list.)
 Self-service provisioning or user self-provisioning: In user self-provisioning, sometimes
referred to as cloud self-service, the customer uses a web form to acquire resources from the
cloud provider, sets up a customer account, and pays with a credit card. Shortly after,
resources are made accessible for consumer use.
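As a minimal sketch of the dynamic-provisioning idea, the Python function below computes how many VMs a control loop should keep running given the current average CPU utilization. The target utilization, the bounds, and the metric itself are illustrative assumptions; a real provider would wire this to its monitoring and provisioning APIs.

def reconcile(current_vms, cpu_utilization, target=0.6, min_vms=1, max_vms=10):
    # Return the new VM count given average CPU utilization in [0.0, 1.0].
    desired = round(current_vms * cpu_utilization / target)
    return max(min_vms, min(max_vms, desired))

print(reconcile(current_vms=4, cpu_utilization=0.9))  # 6: scale out under load
print(reconcile(current_vms=4, cpu_utilization=0.3))  # 2: scale in when idle

Run periodically, such a loop adds resources as demand rises and releases them as it falls, which is exactly what pay-per-use billing depends on.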

The following are the three models of cloud provisioning:

 Advance provisioning: the customer signs a formal contract of service with the cloud provider, which prepares the resources before the service starts.
 Dynamic provisioning: cloud resources are deployed to match a customer's fluctuating demands and are billed on a pay-per-use basis.
 User self-provisioning: the customer acquires resources directly through the provider's self-service web interface.

Introduction
Cloud Exchange (CEx) serves as a market maker, bringing service providers and
users together. The University of Melbourne proposed it under Intercloud architecture
(Cloudbus). It supports brokering and exchanging cloud resources for scaling
applications across multiple clouds. It aggregates the infrastructure demands from
application brokers and evaluates them against the available supply. It supports the
trading of cloud services based on competitive economic models such as commodity
markets and auctions.
Figure: Global exchange of cloud resources (source: https://snscourseware.org/snsctnew/files/1583815568.pdf)

Entities of the Global exchange of cloud resources
Now we will talk about the various entities of the global exchange of cloud resources.
Market directory
A market directory is an extensive database of resources, providers, and participants
using the resources. Participants can use the market directory to find providers or
customers with suitable offers.
Auctioneers
Auctioneers periodically clear the bids and asks submitted by market participants. They sit between providers and customers and grant the resources available in the global exchange of cloud resources to the highest-bidding customer.
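To illustrate what "clearing bids and asks" means, here is a toy double-auction step in Python. The participant names and the midpoint pricing rule are illustrative assumptions, not a description of any real exchange.

def clear(bids, asks):
    # bids/asks: lists of (participant, price). Returns matched trades.
    bids = sorted(bids, key=lambda b: b[1], reverse=True)  # highest bid first
    asks = sorted(asks, key=lambda a: a[1])                # cheapest ask first
    trades = []
    for (buyer, bid), (seller, ask) in zip(bids, asks):
        if bid < ask:
            break  # no more profitable matches
        trades.append((buyer, seller, (bid + ask) / 2))    # midpoint price
    return trades

print(clear(bids=[("app-broker-1", 12), ("app-broker-2", 8)],
            asks=[("provider-A", 7), ("provider-B", 11)]))
# -> [('app-broker-1', 'provider-A', 9.5)]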
Brokers
Brokers mediate between consumers and providers by buying capacity from the
provider and sub-leasing these to the consumers. They must select consumers whose
apps will provide the most utility. Brokers may also communicate with resource
providers and other brokers to acquire or trade resource shares. To make decisions,
these brokers are equipped with a negotiating module informed by the present
conditions of the resources and the current demand.
Service-level agreements (SLAs)
The service-level agreement (SLA) details the service to be provided in terms of metrics agreed upon by all parties, along with penalties for failing to meet the expectations.
The consumer participates in the utility market via a resource management proxy that chooses a set of brokers based on their offerings. SLAs are formed between the consumer and the brokers, which bind the latter to provide the guaranteed resources. After that, the customer either runs their environment on the leased resources or uses the provider's interfaces to scale their applications.
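A minimal sketch of how such an agreement might be represented in code, assuming a single numeric metric and a linear penalty rule (both simplifications):

from dataclasses import dataclass

@dataclass
class SLA:
    metric: str           # e.g. "availability"
    guaranteed: float     # agreed target, e.g. 99.9 (%)
    penalty_per_unit: float

    def penalty(self, measured: float) -> float:
        # Penalty owed if the measured value falls short of the guarantee.
        shortfall = max(0.0, self.guaranteed - measured)
        return shortfall * self.penalty_per_unit

sla = SLA(metric="availability", guaranteed=99.9, penalty_per_unit=100.0)
print(sla.penalty(measured=99.5))  # shortfall of 0.4 -> penalty of about 40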
Providers
A provider has a price-setting mechanism that determines the current price for its resources based on market conditions, user demand, and the current degree of utilization of each resource.
Based on an initial estimate of utility, an admission-control mechanism at the provider's end selects which auctions to participate in and which brokers to negotiate with.
Resource management system
The resource management system provides functionalities such as advance
reservations that enable guaranteed provisioning of resource capacity.

Global Exchange of Cloud Resources
An open compute exchange may provide a centralized point where cloud consumers and providers can decide which cloud resources to utilize, as well as a clearing house for providers with excess capacity. Another example may be based on geographical cloud computing.
It provides network services for enterprises, new media providers, and telecoms carriers. These services cover cloud-centric connectivity, from managed SD-WAN and hybrid networks to direct cloud connections and 100 Gbps+ waves.

The main entities involved are:
 Market directory
 Banking system
 Brokers
 Price setting mechanism
 Admission control mechanism
 Resource management system
 Consumers utility function
 Resource management proxy
Challenges:
 Unwillingness to shift from a traditional, controlled environment.
 Regulatory pressure.
 How to obtain restitution in case of SLA violation.

Security Overview

Cloud computing security or, more simply, cloud security refers to a broad set of policies, technologies, applications, and controls utilized to protect virtualized IP, data, applications, services, and the associated infrastructure of cloud computing. It is a sub-domain of computer security.
Top Cloud Computing Security Challenges
Misconfiguration

Cloud computing has emerged as a widely accepted approach for accessing resources
remotely while simultaneously reducing costs. Cloud computing security concerns can be
effectively mitigated through proper configuration of your cloud resources.
Misconfiguration is the top cloud computing security challenge, as users must
appropriately protect their data and applications in the cloud.

To prevent this cloud security threat, users must ensure their data is protected, and
applications are configured correctly. It can be accomplished using a cloud storage
service that offers security features such as encryption or access control. Additionally,
implementing security measures such as authentication and password requirements can
help protect sensitive data in the cloud. By taking these steps, users can increase the
security of their cloud computing infrastructure and stay protected from cyber threats.
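As one concrete, hedged example of auditing a configuration: the Python snippet below uses boto3 to check whether an S3 bucket has all four public-access-block flags enabled. The bucket name is a placeholder, and a bucket with no such configuration at all raises an error, which we treat here as a failed check.

import boto3
from botocore.exceptions import ClientError

def bucket_blocks_public_access(bucket):
    s3 = boto3.client("s3")
    try:
        cfg = s3.get_public_access_block(Bucket=bucket)[
            "PublicAccessBlockConfiguration"]
    except ClientError:
        return False  # no public-access-block configuration set at all
    return all(cfg.get(flag, False) for flag in (
        "BlockPublicAcls", "IgnorePublicAcls",
        "BlockPublicPolicy", "RestrictPublicBuckets"))

print(bucket_blocks_public_access("example-bucket"))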

Unauthorized Access

Unauthorized access to data is one of the most common cloud security problems
businesses face. The cloud provides a convenient way for companies to store and access
data, which can make data vulnerable to cyber threats. Security and cloud computing
threats can include unauthorized access to user data, theft of data, and malware attacks.

To protect their data from these threats, businesses must ensure that only authorized users
can access it. Another security feature businesses can implement is encrypting sensitive
data in the cloud. It will help ensure that only authorized users can access it. By
implementing security measures such as encryption and backup procedures, businesses
can safeguard their data from unauthorized access and ensure its integrity.

Hijacking of Accounts
Hijacking of user accounts is one of the major cloud security issues. Using cloud-based
applications and services will increase the risk of account hijacking. As a result, users
must be vigilant about protecting their passwords and other confidential information to
stay secure in the cloud.

Users can protect themselves using strong passwords, security questions, and two-factor
authentication to access their accounts. They can also monitor their account activity and
take steps to protect themselves from unauthorized access or usage. This will help ensure
that hackers cannot access their data or hijack their accounts. Overall, staying vigilant
about security and updating your security measures are vital to the security of cloud
computing.
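As a small sketch of the two-factor authentication mentioned above, the snippet below uses the Python pyotp library to generate and verify time-based one-time passwords (TOTP), the scheme most authenticator apps implement:

import pyotp

secret = pyotp.random_base32()      # provisioned once per user account
totp = pyotp.TOTP(secret)

code = totp.now()                   # what the user's authenticator app shows
print("valid:", totp.verify(code))  # the service checks the submitted code

Even if a password leaks, an attacker without the current one-time code cannot hijack the account.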

Lack of Visibility

Cloud computing has made it easier for businesses to access and store their data online,
but this convenience comes with risks. As a result, companies need to protect their data
from unauthorized access and theft. However, cloud computing also poses security threats due to its reliance on remote servers. To ensure that their systems are accessible only to authorized sources, businesses must implement security measures such as strong authentication, data loss prevention (DLP), data breach detection, and data breach response.

With cloud computing, visibility is vital, and businesses must regularly audit security
operations and procedures to detect vulnerabilities and threats before they become a real
problem. By taking the necessary precautions and implementing security in cloud
computing, organizations can ensure that their data remains secure in this cloud-based
environment.

Data Privacy/Confidentiality

Data privacy and confidentiality are critical issues when it comes to cloud computing.
With cloud computing, businesses can access their data from anywhere worldwide,
raising concerns about securing cloud computing. Companies don’t have control over
who can access their data, so they must ensure that only authorized users can access it.
Data breaches can happen when hackers gain access to company data. In the coming
years, there will be even more data privacy and confidentiality issues due to the rise of
big data and the increased use of cloud computing in business.

Data privacy and confidentiality issues will continue to be essential concerns for businesses in the years ahead as data-intensive applications grow in popularity.
External Sharing of Data

External data sharing is one of the leading security issues in cloud computing that businesses face. The issue arises when data is shared with third-party providers who have not been vetted and approved by the organization. As a result, external data sharing can lead to the loss of critical business information, theft, and fraud. To prevent these issues, companies must implement robust security measures, such as encryption and sound data management practices, which will help ensure that sensitive data remains secure and confidential.

By implementing appropriate security measures, companies can protect their data from
unauthorized access and ensure its reliability and integrity. Overall, external data sharing
is a major cloud security concern that businesses must address to stay ahead of the
competition.

Legal and Regulatory Compliance

A cloud is a powerful tool that can help organizations reduce costs and improve the
efficiency of their operations. However, cloud computing presents new security
challenges that must be addressed to protect data and ensure compliance with legal and
regulatory requirements.

Organizations must ensure data security for cloud and comply with legal and regulatory
requirements to ensure the safety and integrity of their cloud-based systems. Cyber threats
such as malware, data breaches, and phishing are just a few challenges organizations face
when using cloud computing.

To combat these cloud based security issues, it’s vital to perform regular security audits,
maintain up-to-date security configurations, implement robust authentication procedures,
use strong passwords, use multi-factor authentication methods, and regularly update
software and operating systems. While cloud computing can increase the risk of
cyberattacks, organizations that are diligent about their security posture can stay ahead of
their competitors in this rapidly changing market.

Unsecure Third-party Resources

Third-party resources are applications, websites, and services outside the cloud provider’s
control. These resources may have cloud security vulnerabilities, and unauthorized access
to your data is possible. Additionally, unsecured third-party resources may allow hackers
to access your cloud data. These vulnerabilities can put your security at risk. Therefore,
ensuring that only trusted, secure resources are used for cloud computing is essential. In
addition, it will help ensure that only authorized individuals access data and reduce the
risk of unauthorized data loss or breach.

Unsecured third-party resources can pose a threat to cloud security, especially when
interacting with sensitive data in cloud storage accounts. Hackers can access these
resources to gain access to your cloud data and systems. Implementing strong security
controls such as multi-factor authentication and enforcing strict password policies can
help safeguard against this risk. In addition, by restricting access to only trusted
resources, you can ensure that only authorized individuals access data and reduce the risk
of unauthorized data loss or breach.

In this section, we discuss an overview of cloud computing and its need, with the main focus on security issues in cloud computing. Let's discuss them one by one.
Cloud Computing:
Cloud computing is a technology that provides remote services over the internet to manage, access, and store data, rather than storing it on local servers or drives. The data can be anything: images, audio, video, documents, files, etc.

Need of Cloud Computing :


Before cloud computing, most large and small IT companies used traditional methods: they stored data on their own servers and needed a separate server room housing database servers, mail servers, firewalls, routers, modems, high-speed network devices, and so on. IT companies had to spend a great deal of money on this. To reduce these problems and costs, cloud computing came into existence, and most companies have shifted to this technology.
Security Issues in Cloud Computing:
There is no doubt that cloud computing provides various advantages, but there are also some security issues, as follows.
1. Data Loss –
Data loss is one of the issues faced in cloud computing. This is also known as data leakage. Our sensitive data is in the hands of somebody else, and we do not have full control over our database. So, if the security of the cloud service is breached by hackers, then they may gain access to our sensitive data or personal files.

2. Interference of Hackers and Insecure APIs –
When we talk about the cloud and its services, we are talking about the internet, and the easiest way to communicate with the cloud is through APIs. It is therefore important to protect the interfaces and APIs used by external users. In addition, a few cloud services are exposed in the public domain; these are the vulnerable parts of cloud computing, because third parties may access them, and through them hackers can more easily harm or access our data.

3. User Account Hijacking –


Account Hijacking is the most serious security issue in Cloud Computing. If
somehow the Account of User or an Organization is hijacked by a hacker then
the hacker has full authority to perform Unauthorized Activities.

4. Changing Service Provider –
Vendor lock-in is also an important security issue in cloud computing. Many organizations face problems while shifting from one vendor to another. For example, if an organization wants to shift from AWS Cloud to Google Cloud Services, it faces various problems, such as moving all of its data, and, since both cloud services use different techniques and functions, adapting to those as well. The charges for AWS may also differ from those of Google Cloud, etc.

5. Lack of Skill –
Everyday work, shifting to another service provider, needing an extra feature, or working out how to use a feature are the main problems in IT companies that lack skilled employees. Working with cloud computing therefore requires skilled staff.

6. Denial of Service (DoS) attack –


This type of attack occurs when a system receives too much traffic. DoS attacks mostly target large organizations such as the banking sector, government sector, etc. When a DoS attack occurs, service is disrupted and data may be lost, and recovering requires a great amount of money as well as time.
7. Shared Resources: Cloud computing relies on a shared infrastructure. If one
customer’s data or applications are compromised, it may potentially affect other
customers sharing the same resources, leading to a breach of confidentiality or
integrity.
8. Compliance and Legal Issues: Different industries and regions have specific
regulatory requirements for data handling and storage. Ensuring compliance with
these regulations can be challenging when data is stored in a cloud environment
that may span multiple jurisdictions.
9. Data Encryption: While data in transit is often encrypted, data at rest can be susceptible to breaches. It's crucial to ensure that data stored in the cloud is properly encrypted to prevent unauthorized access (a minimal sketch follows this list).
10. Insider Threats: Employees or service providers with access to cloud systems
may misuse their privileges, intentionally or unintentionally causing data
breaches. Proper access controls and monitoring are essential to mitigate these
threats.
11. Data Location and Sovereignty: Knowing where your data physically resides is
important for compliance and security. Some cloud providers store data in
multiple locations globally, and this may raise concerns about data sovereignty
and who has access to it.
12. Loss of Control: When using a cloud service, you are entrusting a third party
with your data and applications. This loss of direct control can lead to concerns
about data ownership, access, and availability.
13. Incident Response and Forensics: Investigating security incidents in a cloud
environment can be complex. Understanding what happened and who is
responsible can be challenging due to the distributed and shared nature of cloud
services.
14. Data Backup and Recovery: Relying on cloud providers for data backup and
recovery can be risky. It’s essential to have a robust backup and recovery strategy
in place to ensure data availability in case of outages or data loss.
15. Vendor Security Practices: The security practices of cloud service providers
can vary. It’s essential to thoroughly assess the security measures and
certifications of a chosen provider to ensure they meet your organization’s
requirements.
16. IoT Devices and Edge Computing: The proliferation of IoT devices and edge
computing can increase the attack surface. These devices often have limited
security controls and can be targeted to gain access to cloud resources.
17. Social Engineering and Phishing: Attackers may use social engineering tactics
to trick users or cloud service providers into revealing sensitive information or
granting unauthorized access.
18. Inadequate Security Monitoring: Without proper monitoring and alerting
systems in place, it’s challenging to detect and respond to security incidents in a
timely manner.
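As promised under point 9, here is a minimal sketch of client-side encryption at rest using the Python cryptography library: data is encrypted before it ever reaches the cloud store, so the provider only ever holds ciphertext. Key management is deliberately omitted and is the hard part in practice.

from cryptography.fernet import Fernet

key = Fernet.generate_key()        # keep this key outside the cloud provider
f = Fernet(key)

ciphertext = f.encrypt(b"sensitive customer record")
# ...upload ciphertext to the cloud store...
plaintext = f.decrypt(ciphertext)  # only holders of the key can do this
print(plaintext)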

OpenStack Architecture

Introduction

OpenStack is an open-standard and free platform for cloud computing. Mostly, it is deployed as IaaS (Infrastructure-as-a-Service) in both private and public clouds, where virtual servers and other types of resources are made available to users. The platform combines interrelated components that control networking resources, storage resources, and multi-vendor hardware processing resources throughout the data center. Users manage it through command-line tools, RESTful web services, and a web-based dashboard.

In 2010, OpenStack began as a joint project of NASA and Rackspace Hosting. It is managed by the OpenStack Foundation, a non-profit corporate entity established in September 2012 to promote the OpenStack community and software. More than 50 enterprises have joined the project.
Architecture of OpenStack

OpenStack has a modular architecture, with a code name for each of its components.

Introduction to OpenStack


It is a free, open-standard cloud computing platform that first came into existence on July 21, 2010. It was a joint project of Rackspace Hosting and NASA to make cloud computing more ubiquitous in nature. It is deployed as Infrastructure-as-a-Service (IaaS) in both public and private clouds where virtual resources are made available to the users. The software platform consists of interrelated components that control multi-vendor hardware pools of processing, storage, and networking resources throughout a data center. In OpenStack, the tools which are used to build this platform are referred to as "projects". These projects handle a large number of services including computing, networking, and storage services. Unlike virtualization, in which resources such as RAM, CPU, etc. are abstracted from the hardware using hypervisors, OpenStack uses a number of APIs to abstract those resources so that users and administrators are able to interact directly with the cloud services.

OpenStack components

Apart from the various projects which constitute the OpenStack platform, there are nine major services, namely Nova, Neutron, Swift, Cinder, Keystone, Glance, Horizon, Ceilometer, and Heat. Here is a basic definition of each component to give us a basic idea of what they do.
1. Nova (compute service): It manages the compute resources like creating, deleting,
and handling the scheduling. It can be seen as a program dedicated to the
automation of resources that are responsible for the virtualization of services and
high-performance computing.
2. Neutron (networking service): It is responsible for connecting all the networks
across OpenStack. It is an API driven service that manages all networks and IP
addresses.
3. Swift (object storage): It is an object storage service with high fault tolerance, used to store and retrieve unstructured data objects with the help of a RESTful API. Being a distributed platform, it also provides redundant storage within servers that are clustered together. It can successfully manage petabytes of data.
4. Cinder (block storage): It is responsible for providing persistent block storage
that is made accessible using an API (self- service). Consequently, it allows users
to define and manage the amount of cloud storage required.
5. Keystone (identity service provider): It is responsible for all types of
authentications and authorizations in the OpenStack services. It is a directory-
based service that uses a central repository to map the correct services with the
correct user.
6. Glance (image service provider): It is responsible for registering, storing, and
retrieving virtual disk images from the complete network. These images are stored
in a wide range of back-end systems.
7. Horizon (dashboard): It is responsible for providing a web-based interface for
OpenStack services. It is used to manage, provision, and monitor cloud resources.
8. Ceilometer (telemetry): It is responsible for metering and billing of services used.
Also, it is used to generate alarms when a certain threshold is exceeded.
9. Heat (orchestration): It is used for on-demand service provisioning with auto-
scaling of cloud resources. It works in coordination with the ceilometer.
These are the services around which the platform revolves. They individually handle storage, compute, networking, identity, etc., and they form the base on which the remaining projects rely to orchestrate services, allow bare-metal provisioning, handle dashboards, and so on.
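To make the API-driven model concrete, here is a hedged sketch using the official Python openstacksdk. The cloud name, image, flavor, and network are placeholders that would come from your clouds.yaml and your deployment; under the hood, Keystone authenticates the connection and Nova schedules and boots the server.

import openstack

conn = openstack.connect(cloud="my-cloud")    # Keystone handles authentication

image = conn.compute.find_image("ubuntu-22.04")     # served by Glance
flavor = conn.compute.find_flavor("m1.small")       # a Nova flavor
network = conn.network.find_network("private")      # managed by Neutron

server = conn.compute.create_server(
    name="demo-vm", image_id=image.id, flavor_id=flavor.id,
    networks=[{"uuid": network.id}])
server = conn.compute.wait_for_server(server)       # Nova boots the VM
print(server.status)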

Features of OpenStack
 Modular architecture: OpenStack is designed with a modular architecture that
enables users to deploy only the components they need. This makes it easier to
customize and scale the platform to meet specific business requirements.
 Multi-tenancy support: OpenStack provides multi-tenancy support, which
enables multiple users to access the same cloud infrastructure while maintaining
security and isolation between them. This is particularly important for cloud
service providers who need to offer services to multiple customers.
 Open-source software: OpenStack is an open-source software platform that is
free to use and modify. This enables users to customize the platform to meet their
specific requirements, without the need for expensive proprietary software
licenses.
 Distributed architecture: OpenStack is designed with a distributed architecture
that enables users to scale their cloud infrastructure horizontally across multiple
physical servers. This makes it easier to handle large workloads and improve
system performance.
 API-driven: OpenStack is API-driven, which means that all components can be
accessed and controlled through a set of APIs. This makes it easier to automate and
integrate with other tools and services.
 Comprehensive dashboard: OpenStack provides a comprehensive dashboard that
enables users to manage their cloud infrastructure and resources through a user-
friendly web interface. This makes it easier to monitor and manage cloud resources
without the need for specialized technical skills.
 Resource pooling: OpenStack enables users to pool computing, storage, and
networking resources, which can be dynamically allocated and de-allocated based
on demand. This enables users to optimize resource utilization and reduce waste.

Advantages of using OpenStack

 It enables rapid provisioning of resources, which makes orchestration and scaling resources up and down easy.
 Deployment of applications using OpenStack does not consume a large amount of
time.
 Since resources are scalable therefore they are used more wisely and efficiently.
 The regulatory compliances associated with its usage are manageable.

Disadvantages of using OpenStack

 OpenStack is not very robust when orchestration is considered.


 Even today, the APIs provided and supported by OpenStack are not compatible
with many of the hybrid cloud providers, thus integrating solutions becomes
difficult.
 Like all cloud service providers OpenStack services also come with the risk of
security breaches.

Nova (Compute)
Nova is the OpenStack project that provides a way to provision compute instances. Nova supports creating virtual machines and bare-metal servers and has limited support for system containers. It runs as a set of daemons on top of existing Linux servers to provide that service.

This component is written in Python. It uses several external Python libraries, such as SQLAlchemy (an SQL toolkit and object-relational mapper), Kombu (an AMQP messaging framework), and Eventlet (a concurrent networking library). Nova is designed to scale horizontally: instead of switching to larger servers, we procure more servers and install identically configured services on them.

Because of its deep integration into organization-level infrastructure, monitoring Nova performance, and OpenStack performance in general, scaling has become a progressively important issue. Managing end-to-end performance requires tracking metrics from Swift, Cinder, Neutron, Keystone, Nova, and the other services, in addition to analyzing RabbitMQ, which the OpenStack services use for message passing. Each of these services produces its own log files, which must also be analyzed, especially within organization-level infrastructure.

What is VirtualBox?
VirtualBox allows users to create and run virtual machines on their computers, enabling them to install and run multiple operating systems simultaneously. This is particularly useful for testing software, experimenting with different configurations, and isolating environments for increased security.
In this section, you will learn about Oracle VirtualBox, the software that enables us to run operating systems like Ubuntu, Windows, and many others alongside the host system. We describe its origin, usage, and ownership.
What is a Virtual Box?
Oracle Corporation develops VirtualBox, also known as VB. It acts as a hypervisor for x86 machines. Originally, it was created by Innotek GmbH, which made it publicly available in 2007. Innotek was acquired by Sun Microsystems in 2008; since Oracle's acquisition of Sun, it has been developed by Oracle and is referred to as Oracle VM VirtualBox. VirtualBox comes in a variety of flavors, depending on the operating system for which it is configured. VirtualBox on Ubuntu is most common; however, VirtualBox for Windows is also popular. With the introduction of Android phones, VirtualBox for Android has emerged as the new face of virtual machines on smartphones.
Use of VirtualBox
In general, VirtualBox is a software virtualization package that may be run as an application on any supported operating system. It supports the installation of additional operating systems, known as guest OSes. It can then set up and administer guest virtual machines, each with its own operating system and virtual environment. VirtualBox runs on several host operating systems, including Windows XP, Windows 7, Linux, Windows Vista, Mac OS X, Solaris, and OpenSolaris. Windows, Linux, OS/2, BSD, Haiku, and other guest operating systems are supported in various versions and derivatives.
It can be used in the following kinds of projects:
 Software portability
 Application development
 System testing and debugging
 Network simulation
 General computing
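VirtualBox also ships a command-line front end, VBoxManage, which suits the scripted use cases above. The following Python sketch creates and starts a headless VM by driving that tool; the VM name, OS type, and sizes are illustrative.

import subprocess

def vbox(*args):
    # Thin wrapper that runs a VBoxManage subcommand and fails loudly.
    subprocess.run(["VBoxManage", *args], check=True)

vbox("createvm", "--name", "demo-vm", "--ostype", "Ubuntu_64", "--register")
vbox("modifyvm", "demo-vm", "--memory", "2048", "--cpus", "2")
vbox("startvm", "demo-vm", "--type", "headless")  # run without a GUI window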
Advantages of VirtualBox
 Isolation: A virtual machine's isolated environment is suitable for testing software or running programs that demand more resources than are available elsewhere.
 Virtualization: VirtualBox allows users to run another OS on a single computer without purchasing a new device. It generates a virtual machine that functions just like a real computer, with its own processor cores, RAM, and hard disk space dedicated only to the virtual environment.
 Cross-Platform Compatibility: VirtualBox can run Windows, Linux, Solaris, OpenSolaris, and macOS as its host operating system (OS). Users do not have to be concerned about compatibility difficulties while setting up virtual machines on numerous devices or platforms.
 Easy Control Panel: VirtualBox's simple control interface makes it easy to configure parameters like CPU cores and RAM. Users may begin working on their projects within a few moments of installing the software on their PCs or laptops.
 Multiple Modes: Users have control over how they interact with their installations, whether in full-screen mode, seamless window mode, scaled window mode, or with 3D graphics acceleration. This allows users to customize their experience according to the kind of project they are working on.
Disadvantages of VirtualBox
 VirtualBox relies on the host computer's hardware, so a virtual machine will only be effective if the host is fast and powerful. As a result, VirtualBox is dependent on its host computer.
 If the host computer has defects and only one virtual machine is running, just that system is affected; if several virtual machines are running on the same host, all of them are affected.
 Though these machines act like real machines, they are not: the host CPU must mediate every request, resulting in slower operation. Compared to real computers, virtual machines are less efficient.
Differences Between VMware and VirtualBox
 VMware is virtualization software that helps us run multiple operating systems on a single host; VirtualBox is the Oracle tool that provides host-based virtualization.
 VMware is used for enterprise and home purposes; VirtualBox is used for educational and private purposes.
 VMware offers virtualization at the hardware level; VirtualBox offers virtualization at both the hardware and software levels.
 VMware's proprietary license can be availed for $79.99; VirtualBox is free of charge.
 VMware runs on Linux, Windows, and macOS hosts; VirtualBox runs on Linux, Windows, Solaris, and macOS hosts.
 VMware is not an open-source tool; VirtualBox is an open-source tool.
 VMware offers virtual machine encryption; VirtualBox offers virtual machine encryption only with the extension pack.
 VMware supports the VMDK disk format; VirtualBox supports the VDI, VHD, VMDK, and HDD disk formats.
 VMware offers shared storage support with NFS, CIFS, and iSCSI; VirtualBox does not offer shared storage support.
 VMware offers ease of access to users; VirtualBox does not allow the same ease of access as VMware.
 VMware provides a complicated user interface; VirtualBox provides a user-friendly interface.
 VMware provides out-of-the-box USB device support; VirtualBox requires the Extension Pack for USB 2.0/3.0 functionality.
 VMware's video memory is limited to 2 GB; VirtualBox's video memory is limited to 128 MB.
 In VMware, 3D acceleration is enabled by default; in VirtualBox, it must be enabled manually.
 VMware is a type 1 hypervisor; VirtualBox is a type 2 hypervisor.

Map Reduce in Hadoop




One of the three components of Hadoop is Map Reduce. The first component of
Hadoop that is, Hadoop Distributed File System (HDFS) is responsible for storing the
file. The second component that is, Map Reduce is responsible for processing the
file.
MapReduce has two main tasks, which are divided phase-wise: in the first phase Map is utilised, and in the next phase Reduce is utilised.
Map and Reduce interfaces
Suppose there is a word file containing some text; let us name this file sample.txt. Note that we use Hadoop to deal with huge files, but for the sake of easy explanation over here, we are taking a text file as an example. So, let's assume that this sample.txt file contains a few lines of text. The content of the file is as follows:
Hello I am GeeksforGeeks
How can I help you
How can I assist you
Are you an engineer
Are you looking for coding
Are you looking for interview questions
what are you doing these days
what are your strengths
Hence, the above 8 lines are the content of the file. Let’s assume that while storing
this file in Hadoop, HDFS broke this file into four parts and named each part as
first.txt, second.txt, third.txt, and fourth.txt. So, you can easily see that the above file
will be divided into four equal parts and each part will contain 2 lines. First two lines
will be in the file first.txt, next two lines in second.txt, next two in third.txt and the last
two lines will be stored in fourth.txt. All these files will be stored in Data Nodes and
the Name Node will contain the metadata about them. All this is the task of HDFS.
Now, suppose a user wants to process this file. This is where MapReduce comes into the picture. Suppose this user wants to run a query on this sample.txt. So, instead of bringing sample.txt to the local computer, we send the query to the data. To keep track of our request, we use the Job Tracker (a master service). The Job Tracker traps our request and keeps track of it. Now suppose the user wants to run their query on sample.txt and wants the output in a result.output file, with the query packaged in a file named query.jar. The user will then submit a query like:
$ hadoop jar query.jar DriverCode sample.txt result.output
1. query.jar : query file that needs to be processed on the input file.
2. sample.txt: input file.
3. result.output: directory in which output of the processing will be received.
So, now the Job Tracker traps this request and asks Name Node to run this request on
sample.txt. Name Node then provides the metadata to the Job Tracker. Job Tracker
now knows that sample.txt is stored in first.txt, second.txt, third.txt, and fourth.txt. As
all these four files have three copies stored in HDFS, so the Job Tracker
communicates with the Task Tracker (a slave service) of each of these files but it
communicates with only one copy of each file which is residing nearest to
it. Note: Applying the desired code on the local first.txt, second.txt, third.txt, and fourth.txt files is a process called Map. In Hadoop terminology, the main file sample.txt is called the input file and its four subfiles are called input splits. So, in Hadoop, the number of mappers for an input file equals the number of input splits of this input file. In the above case, the input file sample.txt has four input splits, hence four mappers will be running to process it. The responsibility of handling these
mappers is of Job Tracker. Note that the task trackers are slave services to the Job
Tracker. So, in case any of the local machines breaks down then the processing over
that part of the file will stop and it will halt the complete process. So, each task
tracker sends heartbeat and its number of slots to Job Tracker in every 3 seconds.
This is called the status of Task Trackers. In case any task tracker goes down, the Job
Tracker then waits for 10 heartbeat times, that is, 30 seconds, and even after that if it
does not get any status, then it assumes that either the task tracker is dead or is
extremely busy. So it then communicates with the task tracker of another copy of the
same file and directs it to process the desired code over it. Similarly, the slot
information is used by the Job Tracker to keep a track of how many tasks are being currently
served by the task tracker and how many more tasks can be assigned to it. In this way, the Job
Tracker keeps track of our request. Now, suppose that the system has generated output for
individual first.txt, second.txt, third.txt, and fourth.txt. But this is not the user's desired output. To produce the desired output, all these individual outputs have to be merged or reduced to a single output. This reduction of multiple outputs into a single one is also a process, done by the REDUCER. In Hadoop, the number of output files generated equals the number of reducers; by default, there is one reducer per cluster. Note: Map and Reduce are two different processes of the second component of Hadoop, that is, MapReduce. These are also called the phases of MapReduce; thus we can say that MapReduce has two phases: Phase 1 is Map and Phase 2 is Reduce.
Functioning of Map Reduce
Now, let us move back to our sample.txt file with the same content. Again, it is divided into four input splits, namely first.txt, second.txt, third.txt, and fourth.txt. Now, suppose we want to count the number of occurrences of each word in the file, whose content looks like:
Hello I am GeeksforGeeks
How can I help you
How can I assist you
Are you an engineer
Are you looking for coding
Are you looking for interview questions
what are you doing these days
what are your strengths
Then the output of the ‘word count’ code will be like:

Hello - 1
I - 3
am - 1
GeeksforGeeks - 1
How - 2 (How is written two times in the entire file)
Similarly
Are - 3
are - 2
...and so on
Thus, in order to get this output, the user has to send the query to the data. Suppose the ‘word count’ query is in the file wordcount.jar. The query will then look like:
$ hadoop jar wordcount.jar DriverCode sample.txt result.output
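Since DriverCode above is only named, not shown, here is a hedged sketch of the same word count written for Hadoop Streaming, which lets the mapper and reducer be plain Python scripts reading standard input and writing standard output. The file names and the streaming-jar path are placeholders for your installation.

mapper.py:

import sys

# Emit (word, 1) for every word on standard input.
for line in sys.stdin:
    for word in line.split():
        print(word + "\t1")

reducer.py:

import sys

# Hadoop delivers keys sorted, so equal words arrive consecutively.
current, count = None, 0
for line in sys.stdin:
    word, n = line.rsplit("\t", 1)
    if word != current:
        if current is not None:
            print(current + "\t" + str(count))
        current, count = word, 0
    count += int(n)
if current is not None:
    print(current + "\t" + str(count))

These would be submitted with something like:
$ hadoop jar hadoop-streaming.jar -input sample.txt -output result.output -mapper mapper.py -reducer reducer.py -file mapper.py -file reducer.py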

Levels of Federation and Services in Cloud




Pre-requisite:- Cloud Federation


The implementation and management of several internal and external cloud
computing services to meet business demands is known as cloud federation,
sometimes known as federated cloud. A global cloud system combines community,
private, and public clouds into scalable computing platforms. By utilizing a common
standard to link the cloud environments of several cloud providers, a federated cloud
is built.
Levels of Cloud Federation

Figure: Cloud federation stack

Each level of the cloud federation poses unique problems and operates at a different level of the IT stack, so several strategies and technologies are needed. Taken together, the answers to the problems encountered at each of these levels form a reference model for a cloud federation.
Conceptual Level

The conceptual level addresses the challenge of presenting a cloud federation as an advantageous option compared with renting services from a single cloud provider. At this level, it is crucial to define the new opportunities that a federated environment brings in comparison to a single-provider solution and to explicitly describe the benefits of joining a federation for service providers and service users.
At this level, the following factors need attention:
 The reasons that cloud providers would want to join a federation.
 Motivations for service users to use a federation.
 Benefits for service providers who rent their services to other service providers.
 Obligations of providers once they join the federation.
 Agreements on trust between providers.
 Transparency toward consumers.
Among these factors, the incentives for service providers and customers to join a federation stand out as the most important.

Logical and Operational Level

The obstacles in creating a framework that allows the aggregation of providers from
various administrative domains within the context of a single overlay infrastructure,
or cloud federation, are identified and addressed at the logical and operational level of
a federated cloud.
Policies and guidelines for cooperation are established at this level. Additionally, this
is the layer where choices are made regarding how and when to use a service from
another provider that is being leased or leveraged. The operational component
characterizes and molds the dynamic behavior of the federation as a result of the
decisions made by the individual providers, while the logical component specifies the
context in which agreements among providers are made and services are negotiated.
At this level, MOCC (market-oriented cloud computing) is put into practice and becomes a reality. At this stage, it's crucial to deal with the following difficulties:
 How should a federation be represented?
 How should a cloud service, a cloud provider, or an agreement be modeled and
represented?
 How should the regulations and standards that permit providers to join a federation
be defined?
 What procedures are in place to resolve disputes between providers?
 What obligations does each supplier have to the other?
 When should consumers and providers utilize the federation?
 What categories of services are more likely to be rented than purchased?
 What percentage of the resources should be leased, and how should we value the resources that are leased?
Both academia and industry have potential at the logical and operational levels.

Infrastructure Level
The technological difficulties in making it possible for various cloud computing
systems to work together seamlessly are dealt with at the infrastructure level. It
addresses the technical obstacles keeping distinct cloud computing systems from
existing inside various administrative domains. These obstacles can be removed by
using standardized protocols and interfaces.
The following concerns should be addressed at this level:
 What types of standards ought to be applied?
 How should interfaces and protocols be created to work together?
 Which technologies should be used for collaboration?
 How can we design platform components, software systems, and services that
support interoperability?
Only open standards and interfaces allow for interoperability and composition among different cloud computing vendors. Additionally, each layer of the Cloud Computing Reference Model has significantly different interfaces and protocols.
