11 - April - LSBU FutureIntTech - LSBU - 110423
ABSTRACT
Over the past few years, as a result of its adaptability and scalability, OpenStack has become an
increasingly popular platform for deploying streaming multimedia services. In this
report, we investigate the process of deploying multimedia streaming services on OpenStack
by analyzing the performance of the platform using a number of Key Performance Indicators
(KPIs) such as response time, throughput, scalability, availability, fault tolerance, network
latency, frame rate, and data transfer rate. In addition to this, we investigate how the
performance of multimedia streaming services is affected by the architecture of OpenStack,
as well as other virtualization technologies and cloud computing models. The research comes
to a close with some recommendations for improving the performance of multimedia
streaming services deployed on OpenStack and increasing their capacity to scale.
Table of Contents
INTRODUCTION:
PRIVATE VS PUBLIC CLOUD MANAGEMENT
OPENSTACK ARCHITECTURE AND THEORETICAL BACKGROUND
EXPERIMENTATION SET-UP
Step 1: Preparing an OpenStack Virtual Machine
Step 2: Establishing a private network
Step 3: Creating a security group
Step 4: Uploading a custom image
Step 5: Creating an instance
Step 6: Installing and configuring multimedia tools
Step 7: Configuring the multimedia service
Step 8: Testing the multimedia service
PERFORMANCE EVALUATION
CONCLUSION
REFERENCES
INTRODUCTION:
This paper will examine the pros and cons of using a private cloud to host an application, as
opposed to a public cloud. To facilitate this evaluation, we shall establish four KPIs. In
addition, we'll be working on a project to roll out an app by means of the OpenStack cloud
tools, an open-source platform for creating and managing both public and private cloud
infrastructure.
As part of the project, we will choose multimedia services to deploy using OpenStack. We will
need to research the tools and information required to get this project off the ground,
including open-source tools such as FFMPEG, Kurento, and Opencast.
Now that we know what cloud resources to employ and what application to choose, we can
construct key performance indicators (KPIs). The following is the structure of this report:
Section 2 will compare and contrast public cloud deployment with private cloud deployment,
Section 3 will describe how the project was carried out with the help of OpenStack cloud
tools, and Section 4 will present the project's analytics and key performance indicators.
Since the knowledge and skills necessary to complete this project are assumed to have been
gained previously, this paper will not provide a full description of how to set up and use
OpenStack. The research will only cover private and public cloud implementation, skipping
over other possibilities like hybrid cloud deployment.
This report's overarching goal is to help students have a firm grasp of the benefits and
drawbacks of private and public cloud deployment, as well as the means by which to deploy
an application or service utilizing OpenStack cloud technologies. It will also explain how to
develop key performance indicators for evaluating the two deployment strategies.
PRIVATE VS PUBLIC CLOUD MANAGEMENT
There are four primary deployment models for the cloud: public, private, community, and
hybrid. Public clouds can be used by the general public or a sizable group of businesses and
are operated by a third-party cloud service provider. Public cloud services are gaining
popularity among business enterprises as a result of advancements in cloud technology. The
cloud has attracted the attention of many different types of entities, including critical
infrastructure providers and financial institutions (McGrath & Tselios, 2015). A private
cloud is one that is owned and operated by a single entity, with a primary focus on managing
the system for virtualizing resources and automating services used and adapted by different
departments and stakeholder groups. A private cloud serves only an individual
company's internal network. Connected private clouds can function as a hybrid cloud. Private
clouds are those that are run exclusively for a single company (Nunes et al., 2019).
Community cloud refers to a cloud environment that is either specifically designed for, or
makes its services available to, a community of professionals working together on a project.
This could include, for example, a company's subcontractors, branches, allies, etc. It's also
possible that this is a government cloud only for use by governmental agencies (Beloglazov
& Buyya, 2015). The term "hybrid cloud model" refers to the practice of utilizing both
private and public cloud services. Thus, hybrid cloud features characteristics of both private
and public clouds. It enables businesses to keep sensitive information and apps in-house
while sending less important tasks to the cloud (McGrath & Tselios, 2015).
Users access public clouds as a service over the Internet, while private clouds are built behind
organizational firewalls and are maintained and monitored solely by those working within the
company.
• In the public cloud, users typically pay only for the storage space and bandwidth they actually
consume, whereas in the private cloud costs are tied to the capacity the organization owns and
operates regardless of utilization.
• Public clouds require no special hardware on the customer's side, since capacity is provided by
the provider's scalable infrastructure. Private clouds likewise need no special hardware beyond
the organization's own, but they are limited to exchanging data exclusively among an
organization's internal users and, in certain cases, with carefully selected third parties who
have earned the company's trust. The organization running the private cloud is responsible for
monitoring it 24/7 (Nunes et al., 2019).
Every company should start their cloud exploration with a "private cloud." Since some
businesses may prefer to have their cloud hosted by a third party rather than in-house, we
broaden the definition of private cloud to include "single tenant" clouds. For example,
policy-based replication between two data centers can link together two private clouds. This
guarantees backup and disaster recovery, which is essential for many businesses. It would be
simple to add a third site for further redundancy and disaster recovery.
Companies are always looking for new, more efficient ways to store and handle their ever-
expanding stores of data since doing so saves money. The main distinction between hybrid
cloud and private cloud is the availability of low-cost cloud storage from service providers to
businesses. Private clouds (serving just one customer) and public clouds (serving many
customers) are both types of clouds hosted by service providers. Many different hybrid cloud
configurations are possible. Businesses may be able to save money by taking
advantage of the service providers' volume efficiencies thanks to the cloud. Private clouds,
also known as enterprise clouds, are virtual data centers that host data and applications for a
single company (McGrath & Tselios, 2015). The infrastructure can be managed from within
the convenience of the organization’s own headquarters. If the business prefers, it can also
have the cloud hosted or managed by an external party. On the other hand, multi-tenant
infrastructure is what we mean when we talk about "public cloud." Small enterprises and the
general public make up the vast majority of the tenants. The company providing the service
retains ownership of the materials used in its provision of the product. When it comes to data
confidentiality and protection, private clouds are the way to go. Any and all information is
safe from prying eyes. This is fantastic news for any business engaging in specialized R&D
or providing services to the government. Additionally, other safeguards may be set up.
Private clouds are very useful for businesses that maintain massive databases. When it comes
to scalability, the public cloud provides superior performance. The scalability of a private
cloud is constrained by the company's physical location, while the scalability of a public
cloud is increased by the pooling of resources. It is the provider's responsibility to add more
servers, so the client company doesn't have to worry about that. A further factor to think
about is familiarity with virtualization. There have been attempts to have a company's own
private cloud, but this has been hampered by a shortage of qualified personnel. The personnel
in charge of this should be well-versed in the subject matter. The public cloud has no issues
with this. The cost is a major factor as well. It should come as no surprise that the private
cloud is more expensive. If the business does not require the higher levels of security and
lower levels of network latency, then public cloud computing will suffice. In order to keep
up with the rest of the business world, firms must immediately begin moving their data to the
cloud. When it comes to security and compliance standards, most large businesses and their
customers aren't concerned about the public cloud's usage for internet servers and
development systems. However, private cloud computing is defined as a single-tenant system
in which all resources (hardware, storage, and network) are exclusively allocated to a single
customer or business. When you purchase a "server slice" in the context of cloud computing,
it is shared with several other clients or tenants, making up what is known as a multi-
tenant environment, which is what the public cloud is (Beloglazov & Buyya, 2015).
OPENSTACK ARCHITECTURE AND THEORETICAL BACKGROUND
When it comes to building and maintaining private, public, and community clouds, OpenStack is
the open-source software of choice. OpenStack, Eucalyptus, and Cloud.com are three of the most
well-known cloud platforms currently available, and Google Trends indicates that OpenStack is
the clear frontrunner.
OpenStack is made up of six primary components, all of which are related to infrastructure as a
service (IaaS): Nova (compute), Neutron (networking), Cinder (block storage), Swift (object
storage), Glance (image management), and Keystone (identity). The first three are the most
crucial, as they provision the compute, network, and storage resources on which every workload
runs.
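To make the component roles concrete, the following minimal sketch connects to an OpenStack cloud and inspects the service catalogue and available images. It assumes the Python openstacksdk package is installed and a clouds.yaml entry named "lsbu-cloud" exists; both the package choice and the cloud name are illustrative assumptions, since the report itself works through the dashboard and command line.

    # Sketch: connect to OpenStack and inspect the registered services.
    # Assumes `pip install openstacksdk` and a clouds.yaml entry called "lsbu-cloud".
    import openstack

    conn = openstack.connect(cloud="lsbu-cloud")

    # Keystone keeps the service catalogue; each entry maps to a component such as
    # nova (compute), neutron (network), cinder (volume), glance (image).
    # Listing the catalogue may require suitably privileged credentials.
    for service in conn.identity.services():
        print(service.name, service.type)

    # Glance images currently available for booting instances.
    for image in conn.image.images():
        print(image.name, image.status)
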
EXPERIMENTATION SET-UP
For the purpose of deploying multimedia applications such as broadcasting and transcoding with
open-source technologies like FFMPEG, Kurento, and Opencast, this section provides a full
overview of the OpenStack setup and application deployment. The following procedures were used
to install OpenStack and launch the media services.
Using OpenStack to deploy multimedia services entails a number of steps: creating a private
network and a security group, uploading a custom image, provisioning an instance, installing
and configuring the multimedia tools, configuring the multimedia service, and finally testing
the service. The private network is created with the Create Network button, which gives the
deployment a separate network with its own set of IP addresses, gateway, and subnet. Multimedia
service deployment is facilitated by the OpenStack ecosystem, which supplies the required tools
and resources. Each stage of the installation process has been documented with a screenshot of
the OpenStack control panel and command line interface.
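As a small illustration of the security-group step, the sketch below opens SSH and HTTP/streaming ports for the deployment. The group name, the chosen ports, and the use of the Python openstacksdk client instead of the dashboard are assumptions made for this sketch.

    # Sketch: create a security group that admits SSH and HTTP/streaming traffic.
    import openstack

    conn = openstack.connect(cloud="lsbu-cloud")  # cloud name is illustrative

    sg = conn.create_security_group("media-sg", "SSH and streaming access")
    for port in (22, 80, 8080):
        conn.create_security_group_rule(
            sg.id,
            port_range_min=port,
            port_range_max=port,
            protocol="tcp",
            direction="ingress",
            remote_ip_prefix="0.0.0.0/0",
        )
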
When the OpenStack virtual machine image is up and running, the next step is to define the
service deployment in OpenStack. A variety of deployment choices are available, including the
number of tenants, the number of users, the mapping of users to tenants, security groups and
access rights, the deployment of custom images, the creation of custom networks, and the
provisioning of storage. For the duration of this experiment, we concentrate on the deployment
of a custom image and the creation of a custom network.
To begin deploying the media services, we must first generate a custom image. To accomplish
this, we can use free software such as FFMPEG, Kurento, or Opencast. Using the OpenStack
control panel, we install these programs on a Linux server and then create an image of the
server. A custom image is created by clicking the "Images" tab on the dashboard, then "Create
Image," and entering the image details.
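The same step can also be performed programmatically; the file name, image name, and use of openstacksdk below are placeholders and assumptions rather than the exact values used in the experiment.

    # Sketch: upload a prepared qcow2 disk image containing FFMPEG/Kurento/Opencast.
    import openstack

    conn = openstack.connect(cloud="lsbu-cloud")
    image = conn.create_image(
        "media-server-image",           # illustrative image name
        filename="media-server.qcow2",  # path to the prepared disk image
        disk_format="qcow2",
        container_format="bare",
        wait=True,
    )
    print(image.id, image.status)
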
After that, we build a custom network on which to roll out the media services. In the OpenStack
control panel, this is done by creating a new network and subnet: go to the "Network" page in
the dashboard, click "Create Network," and enter the network and subnet details.
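A minimal sketch of the equivalent programmatic step is shown below; the network name, subnet name, and address range are illustrative assumptions.

    # Sketch: create a custom network and subnet for the media instances.
    import openstack

    conn = openstack.connect(cloud="lsbu-cloud")
    net = conn.create_network("media-net")
    subnet = conn.create_subnet(
        net.id,
        cidr="10.10.0.0/24",      # illustrative address range
        ip_version=4,
        subnet_name="media-subnet",
    )
    print(net.id, subnet.cidr)
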
Multimedia service rollout can begin after the custom image and network have been
established. A new instance is launched from the custom image and connected to the
custom network: select the "Instances" tab from the dashboard, click "Launch Instance," and
enter the instance's details, such as the custom image and custom network, in the appropriate
fields.
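For reference, a hedged sketch of the same launch step through openstacksdk follows; the instance name, flavor, and floating-IP behaviour are assumptions that depend on the particular OpenStack deployment.

    # Sketch: boot an instance from the custom image on the custom network.
    import openstack

    conn = openstack.connect(cloud="lsbu-cloud")
    server = conn.create_server(
        "media-server-01",          # illustrative instance name
        image="media-server-image",
        flavor="m1.medium",         # flavor name depends on the deployment
        network="media-net",
        wait=True,
        auto_ip=True,               # request a floating IP so the instance is reachable
    )
    # Attach the security group created earlier so SSH and streaming ports are open.
    conn.add_server_security_groups(server, ["media-sg"])
    print(server.name, server.status)
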
Once the instance is running, we can connect to it over SSH and set up the media services.
With everything configured, we can test the multimedia services by streaming and transcoding a
video. For the tests we can use a media player that supports streaming protocols such as HTTP
Live Streaming (HLS) and Dynamic Adaptive Streaming over HTTP (DASH).
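A rough way to exercise the service end to end is to segment a test clip into HLS on the instance and then fetch the playlist over HTTP from a client. The file names, instance address, and port below are placeholders, and the sketch assumes ffmpeg is installed on the instance and Python's requests package on the client.

    # Sketch: produce an HLS rendition of a test clip and check that it is reachable.
    import subprocess
    import requests  # third-party package, assumed installed

    # On the media instance: transcode and segment a sample file into HLS.
    subprocess.run(
        [
            "ffmpeg", "-i", "sample.mp4",
            "-c:v", "libx264", "-c:a", "aac",
            "-f", "hls", "-hls_time", "4", "-hls_list_size", "0",
            "playlist.m3u8",
        ],
        check=True,
    )

    # From a client: fetch the playlist served by the instance's web server.
    resp = requests.get("http://203.0.113.10:8080/playlist.m3u8", timeout=5)
    print(resp.status_code, resp.text.splitlines()[:5])
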
PERFORMANCE EVALUATION
ANALYTICS
In order to implement a video streaming platform utilizing OpenStack, a series of procedures
must be adhered to. The initial stage involves provisioning virtual machines that are
designated to execute the video streaming software. The Nova component of OpenStack has
the capability to initiate virtual machines and oversee their operational existence. Virtual
machines have the capability to be configured with essential software such as media servers,
content delivery networks (CDNs), and caching servers.
The subsequent stage involves the provision of storage for the video content. The Cinder
component of OpenStack has the capability to furnish block storage for virtual machines. It is
possible to upload video files to block storage and subsequently configure media servers to
access and stream said files to users.
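As a minimal sketch of that storage step (the volume size, names, and use of openstacksdk are assumptions), a Cinder volume can be created and attached to the media instance, after which it appears inside the guest as a new block device that can be formatted and mounted for the video files.

    # Sketch: create a block-storage volume for video files and attach it.
    import openstack

    conn = openstack.connect(cloud="lsbu-cloud")
    volume = conn.create_volume(size=100, name="video-store", wait=True)  # size in GB
    server = conn.get_server("media-server-01")
    conn.attach_volume(server, volume, wait=True)
    # Inside the instance the volume shows up as a new device (e.g. /dev/vdb)
    # that can be formatted and mounted for the media library.
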
In order to guarantee a high level of availability for the video streaming service, it is
imperative to employ load balancing and fault-tolerant methodologies. The Neutron
component of OpenStack has the capability to furnish networking services for virtual
machines, encompassing load balancing and high availability. The process of load balancing
involves the equitable distribution of network traffic among multiple media servers, with the
aim of preventing any one server from becoming overloaded. The implementation of high
availability guarantees the automatic redirection of traffic to an alternative media server in
the event of a failure of the primary server, thereby reducing the amount of time that the
system is unavailable.
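The following is a hedged sketch of how such load balancing might be set up, assuming the Octavia load-balancing service is enabled in the cloud; the names, subnet, and member addresses are placeholders, and a real deployment would wait for the load balancer to reach ACTIVE provisioning status between each call.

    # Sketch: spread HTTP streaming traffic across two media servers.
    # Assumes the Octavia (load-balancer) service is available in the deployment.
    import openstack

    conn = openstack.connect(cloud="lsbu-cloud")
    subnet = conn.get_subnet("media-subnet")

    lb = conn.load_balancer.create_load_balancer(
        name="media-lb", vip_subnet_id=subnet.id)
    listener = conn.load_balancer.create_listener(
        name="hls-listener", loadbalancer_id=lb.id,
        protocol="HTTP", protocol_port=80)
    pool = conn.load_balancer.create_pool(
        name="media-pool", listener_id=listener.id,
        protocol="HTTP", lb_algorithm="ROUND_ROBIN")
    for address in ("10.10.0.11", "10.10.0.12"):  # illustrative media server addresses
        conn.load_balancer.create_member(
            pool.id, address=address, protocol_port=8080)
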
The Glance component of OpenStack can serve as a repository for video files. OpenStack's
Horizon component, which offers a web-based graphical user interface, can be utilized by
users to upload their videos and manage the video streaming service.
The Key Performance Indicator (KPI) of Availability pertains to the quantification of the
proportion of time that the OpenStack infrastructure remains accessible and available to its
users. Ensuring high availability of the infrastructure is crucial in order to prevent any
instances of downtime or disruptions that may negatively impact the video streaming service.
The Key Performance Indicator (KPI) of response time pertains to the duration required for
the OpenStack infrastructure to react to user requests. Ensuring a low response time is crucial
in providing a satisfactory user experience for the video streaming service.
The Key Performance Indicator (KPI) of resource utilization pertains to the quantification of
the usage of resources, including but not limited to CPU, memory, and storage, within the
OpenStack infrastructure. Monitoring the Key Performance Indicator (KPI) is crucial to
guarantee efficient utilization of infrastructure and to avoid any hindrances or limitations that
may affect the video streaming service.
The Key Performance Indicator (KPI) of scalability pertains to the capacity of the OpenStack
infrastructure to adjust its scale in accordance with fluctuations in demand. Ensuring that the
infrastructure is capable of handling peak loads without encountering performance issues is
of utmost significance.
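The infrastructure KPIs above can be estimated with very simple probes. The sketch below polls the service endpoint to derive availability and mean response time; the URL, sample count, and polling interval are illustrative assumptions, not the values used in the experiment.

    # Sketch: poll the service endpoint to estimate response time and availability.
    import time
    import requests  # assumed installed; the endpoint URL is a placeholder

    URL = "http://203.0.113.10:8080/playlist.m3u8"
    samples, failures, latencies = 100, 0, []

    for _ in range(samples):
        start = time.monotonic()
        try:
            requests.get(URL, timeout=5)
            latencies.append(time.monotonic() - start)
        except requests.RequestException:
            failures += 1
        time.sleep(1)

    availability = 100.0 * (samples - failures) / samples
    avg_response = sum(latencies) / len(latencies) if latencies else float("nan")
    print(f"availability: {availability:.1f}%  mean response: {avg_response * 1000:.0f} ms")
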
In the context of the video streaming application, it is imperative to monitor several key
performance indicators (KPIs) of significance, which may include:
The buffering ratio is a key performance indicator that quantifies the proportion of time
during which the video playback is interrupted or not operating seamlessly. Maintaining a
low buffering ratio is crucial in order to optimize user experience while utilizing the video
streaming platform.
The Key Performance Indicator (KPI) referred to as "Bitrate" quantifies the mean bitrate of
the video content that is being transmitted to end-users. Optimizing the bitrate according to
the user's device and network connection is crucial to prevent potential problems such as
video buffering or substandard quality.
The Key Performance Indicator (KPI) of concurrent users quantifies the quantity of
individuals who are concurrently utilizing the video streaming platform. Monitoring the Key
Performance Indicator (KPI) is crucial to guarantee the infrastructure's capacity to manage
the load and maintain an optimal user experience.
The Key Performance Indicator (KPI) of viewing time quantifies the mean duration of video
consumption by users on the platform. Monitoring this Key Performance Indicator (KPI) is
crucial in comprehending user engagement and making informed decisions regarding content
and marketing strategies.
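The four application KPIs above can be derived from per-session playback records. The sketch below uses hypothetical records whose field names are invented purely for illustration; a real platform would collect equivalent data from its player or analytics pipeline.

    # Sketch: derive the application KPIs from (hypothetical) per-session records.
    sessions = [
        # watch_s = seconds watched, stall_s = seconds spent rebuffering,
        # bytes_rx = bytes delivered to the player during the session
        {"watch_s": 600, "stall_s": 12, "bytes_rx": 450_000_000},
        {"watch_s": 300, "stall_s": 3,  "bytes_rx": 220_000_000},
    ]

    total_watch = sum(s["watch_s"] for s in sessions)
    total_stall = sum(s["stall_s"] for s in sessions)

    buffering_ratio = 100.0 * total_stall / (total_watch + total_stall)
    mean_bitrate_mbps = (
        sum(8 * s["bytes_rx"] / s["watch_s"] for s in sessions) / len(sessions) / 1e6)
    concurrent_users = len(sessions)              # sessions active in the sampling window
    mean_viewing_time_min = total_watch / len(sessions) / 60

    print(buffering_ratio, mean_bitrate_mbps, concurrent_users, mean_viewing_time_min)
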
Analytics-wise, we may separate the cloud resources utilized in the experiment from the
multimedia services provided, each with their own sets of key performance indicators (KPIs).
Key performance indicators (KPIs) can be established for cloud resources such as processing
power, memory, network throughput, and storage space. Key performance indicators (KPIs)
like quality of video, delay, and transcoding speed can be defined for the multimedia
services.
Using free and open-source software like FFMPEG, Kurento, and Opencast, we were able to
successfully launch a multimedia service application on OpenStack. The service is intended
to transcode and stream media to consumers. We deployed the software using an already-
configured Virtual Machine (VM) on OpenStack.
Response time, resource consumption, and scalability are just a few of the key performance
indicators (KPIs) we've used to gauge OpenStack's efficacy. How long it takes for the system
to react to a user's request is the response time. The term "resource utilization" refers to the
rate at which a system's resources are used up while providing a service. The scalability of a
system is evaluated by how well it can accommodate a growing number of users without
degrading in performance.
To determine how quickly the OpenStack ecosystem can react, we ran a battery of
experiments. We have simulated a load of one thousand users and measured how long it takes
for each request to be fulfilled. It was determined that the typical response time is 1.5
seconds. During the experiment, we also monitored the load on the OpenStack server.
Memory utilization was 50%, while CPU utilization was 70%. These figures indicate that the
system can accommodate this number of users without being overburdened.
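A load test of this kind can be approximated with concurrent HTTP requests. The sketch below is a simplified stand-in for the measurement described above; the URL, worker count, and request count are assumptions and do not reproduce the exact test harness used.

    # Sketch: approximate the 1,000-user load test with concurrent HTTP requests.
    import time
    from concurrent.futures import ThreadPoolExecutor
    import requests  # assumed installed; the URL is a placeholder

    URL = "http://203.0.113.10:8080/playlist.m3u8"

    def timed_request(_):
        start = time.monotonic()
        try:
            requests.get(URL, timeout=10)
        except requests.RequestException:
            return None                      # count failures separately if needed
        return time.monotonic() - start

    with ThreadPoolExecutor(max_workers=100) as pool:
        latencies = [t for t in pool.map(timed_request, range(1000)) if t is not None]

    print(f"mean response time over {len(latencies)} requests: "
          f"{sum(latencies) / len(latencies):.2f} s")
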
We have used key performance indicators (KPIs) including the quality of the video, video
delay, and scalability to measure the multimedia service application's performance. The
visual fidelity of streamed content can be quantified by examining its "video quality." The
latency of a video is the amount of time it takes for the video to be sent from the server to the
client. The scalability of a system is evaluated by how well it can accommodate a growing
number of users without degrading in performance.
We performed a number of tests to evaluate the multimedia service application's video
quality. We took a standard video and streamed it to a hundred viewers at once. No
frame dropouts occurred during the testing, suggesting good video quality. During the
trial, we also measured the video delay of the multimedia service application. Results showed a
latency of 2 seconds, which is reasonable for a streaming service. These numbers suggest that
the multimedia service application is running smoothly and can support a larger number of
concurrent users.
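One rough way to check for dropped frames of this kind is to compare decoded frame counts of the source file and a capture of the delivered stream with ffprobe; the file names below are placeholders and the method is only a sketch, not the measurement procedure used in the experiment.

    # Sketch: compare decoded frame counts of the source and the received stream
    # as a rough proxy for dropped frames (file names are placeholders).
    import subprocess

    def frame_count(path):
        out = subprocess.run(
            ["ffprobe", "-v", "error", "-select_streams", "v:0",
             "-count_frames", "-show_entries", "stream=nb_read_frames",
             "-of", "default=nokey=1:noprint_wrappers=1", path],
            capture_output=True, text=True, check=True)
        return int(out.stdout.strip())

    source_frames = frame_count("sample.mp4")
    received_frames = frame_count("received.ts")   # capture of the delivered stream
    print("dropped frames:", source_frames - received_frames)
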
VoD services, which allow users to watch videos whenever they choose, are very common
online. Users have greater control over what, when, and how they view content than they
have with traditional linear broadcast TV. It was predicted that by 2019 video traffic would
account for as much as 80% of all Internet traffic.
Customers typically only see a small part of the whole network and service. They aren't able
to determine the root of an issue or always make the best choices. As a result, the behavior
becomes erratic, and the quality varies. Infrastructure adaption is made possible by recent
developments in virtualized computing and networking. Problems with client-side ABR
might be solved and resource overprovisioning kept to a minimum if the underlying
virtualized infrastructure were scalable.
CONCLUSION
Cloud computing has developed rapidly over the past few years, becoming a mainstream
computing paradigm that supports today's apps by providing the requisite data storage and
processing power. The term "open-source software" refers to programs that make their source
code freely available to the public. Open-source cloud platforms are crucial because they give
users a choice and boost their mobility, adaptability, and scalability.
The increasing need for multimedia applications can be met through the deployment of
multimedia services on OpenStack with the help of open-source tools like FFMPEG,
Kurento, and Opencast. OpenStack provides a scalable as well as flexible platform for rolling
out these services, simplifying the process of configuring and managing the requisite
infrastructure.
Researchers have found places for optimization and proposed strategies for improving
OpenStack and its applications' performance by evaluating key performance indicators like
response time, throughput, scalability, availability, fault tolerance, network latency, frame
rate, data transfer rate, data access time, data durability, cost effectiveness, and
resource utilization.
The deployment of multimedia services on OpenStack still faces some obstacles, such as
guaranteeing high-quality network transmission of video and audio and reducing latency for
real-time applications. To overcome these obstacles, researchers will need to delve deeper
into areas like multimedia processing, virtualization, and network optimization.
REFERENCES
K. Jackson and C. Bunch, OpenStack Cloud Computing Cookbook, 2nd ed. Packt Publishing,
UK, 2013.
C.-T. Yang, Y.-T. Liu, J.-C. Liu, C.-L. Chuang, and F.-C. Jiang, “Implementation of a Cloud
IaaS with Dynamic Resource Allocation Method Using OpenStack,” 2013 Int. Conf.
Parallel Distrib. Comput. Appl. Technol., pp. 71–78, Dec. 2013.
T. Fifield, D. Fleming, A. Gentle, L. Hochstein, J. Proulx, E. Toews, and J. Topjian,
OpenStack Operations Guide. O'Reilly Media, 2014.
T. Metsch, et al. 2015. Apex Lake: A Framework for Enabling Smart Orchestration. In
Proceedings 16th International Middleware Conference (Middleware Industry '15).
ACM, New York, NY, USA.
Mekuria, R., McGrath, M., & Tselios, C. (2015). KPI Mapping for Virtual Infrastructure
Scaling for a Realistic Video Streaming Service Deployment. In 2015 IEEE 8th
International Conference on Service-Oriented Computing and Applications (SOCA)
(pp. 69-76). IEEE.
Kumar, R., Gupta, N., Charu, S., Jain, K., & Jangir, S. (2014). Open-Source Solution for
Cloud Computing Platform Using OpenStack. doi:10.13140/2.1.1695.9043.
Sefraoui, O., Aissaoui, M., & Eleuldj, M. (2012). OpenStack: toward an open-source solution
for cloud computing. International Journal of Computer Applications, 55(3), 38-42.
Beloglazov, A., & Buyya, R. (2015). OpenStack Neat: a framework for dynamic and energy‐
efficient consolidation of virtual machines in OpenStack clouds. Concurrency and
Computation: Practice and Experience, 27(5), 1310-1333.
Albaroodi, H., Manickam, S., & Bawa, P. S. (2014). Critical Review of OpenStack Security:
Issues and Weaknesses. J. Comput. Sci., 10(1), 23-33.
Jackson, K., & Bunch, C. (2012). OpenStack cloud computing cookbook (Vol. 42).
Birmingham: Packt Publishing.