
Unit -3

CONSIDERATIONS FOR CLOUD APPLICATION DESIGN

DESIGN FOR THE CLOUD

Jane is a developer, and she is working on a software project for a client who is planning to host the project with one of
the cloud computing service providers. So, Jane needs advice on the considerations for cloud applications. Her friend
Teresa, a specialist in the cloud computing environment, is trying to help her.

CLOUD APPLICATIONS, NOT APPLICATIONS IN THE CLOUD


Jane asks, ''Are you saying that I should consider certain points when I design for the cloud environment?''
''Yes, you are right,'' Teresa responds. ''Developing for the cloud is different from developing for the on-premises
environment. The cloud environment has some constraints but offers additional features that enable autoscaling, security,
high availability, and improved performance. If you want to benefit from these features, you should plan the
application design for the cloud.''
''Please tell me more about that,'' Jane responds.

''Let's start with a simple but essential matter: access to the machine's resources from the application. You should
consider the type of service you are renting from the provider when designing your application. Take a look at Figure-
1, which shows a comparison of the level of control over resources for local, Infrastructure as a Service (IaaS), and Platform
as a Service (PaaS) environments. You can see that if you go for a PaaS environment, you will not be able to access the
file system of the machine your application is running on! So, if the client is planning for PaaS, then you should expect to
use a storage service for the files your application generates or uses.''

When designing applications for the cloud, irrespective of the chosen platform, I have often found it useful to consider
four specific topics during my initial discussions: scalability, availability, manageability and feasibility.
It is important to remember that the items presented under each topic within this article are not an exhaustive list and are
aimed only at presenting a starting point for a series of long and detailed conversations with the stakeholders of your
project, always the most important part of the design of any application. The aim of these conversations should be to
produce an initial high-level design and architecture. This is achieved by considering these four key elements holistically
within the domain of the customer's project requirements, always remembering to consider the side-effects and trade-offs
of any design decision (i.e. what we gain vs. what we lose, or what we make more difficult).

SCALABILITY
Conversations about scalability should focus on any requirement to add additional capacity to the application and related
services to handle increases in load and demand. It is particularly important to consider each application tier when
designing for scalability, how they should scale individually and how we can avoid contention issues and bottlenecks.
Key areas to consider include:

CAPACITY
 Will we need to scale individual application layers and, if so, how can we achieve this without affecting availability?
 How quickly will we need to scale individual services?
 How do we add additional capacity to the application or any part of it?
 Will the application need to run at scale 24x7, or can we scale down outside business hours or at weekends, for
example?

SUBJECT NAME: CLOUD FACULTY NAME: Dr SUJEETH .T 1


PLATFORM / DATA
 Can we work within the constraints of our chosen persistence services while working at scale (database size,
transaction throughput, etc.)?
 How can we partition our data to aid scalability within persistence platform constraints (e.g. maximum database
sizes, concurrent request limits, etc.)?
 How can we ensure we are making efficient and effective use of platform resources? As a rule of thumb, I generally
tend towards a design based on many small instances, rather than fewer large ones.
 Can we collapse tiers to minimise internal network traffic and use of resources, whilst maintaining efficient
scalability and future code maintainability?

LOAD
 How can we improve the design to avoid contention issues and bottlenecks? For example, can we use queues or a
service bus between services in a co-operating producer, competing consumer pattern?
 Which operations could be handled asynchronously to help balance load at peak times?
 How could we use the platform features for rate-leveling (e.g. Azure Queues, Service Bus, etc.)?
 How could we use the platform features for load-balancing (e.g. Azure Traffic Manager, Load Balancer, etc.)?
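The co-operating producer, competing consumer pattern mentioned above can be sketched with Python's standard library; the in-process queue stands in for a cloud queue or service bus, and the worker count of four is an arbitrary choice for the sketch:

```python
import queue
import threading

work = queue.Queue()            # rate-leveling buffer between tiers
results = []
results_lock = threading.Lock()

def producer(n):
    # The front tier enqueues work instead of calling consumers directly,
    # so traffic spikes are absorbed by the queue rather than the workers.
    for i in range(n):
        work.put(i)

def consumer():
    # Competing consumers: each worker pulls the next available item, so
    # adding workers scales throughput without changing the producer.
    while True:
        item = work.get()
        if item is None:        # sentinel value signals shutdown
            work.task_done()
            break
        with results_lock:
            results.append(item * 2)
        work.task_done()

workers = [threading.Thread(target=consumer) for _ in range(4)]
for w in workers:
    w.start()
producer(100)
for _ in workers:
    work.put(None)              # one sentinel per worker
for w in workers:
    w.join()

print(len(results))  # prints 100
```

In a cloud deployment the queue would be a managed service (e.g. an Azure Queue) and the consumers separate instances, but the contention-avoiding shape is the same.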

AVAILABILITY
Availability describes the ability of the solution to operate in a manner useful to the consumer in spite of transient and
enduring faults in the application and underlying operating system, network and hardware dependencies. In reality, there
is often some crossover between items useful for availability and scalability. Conversations should cover at least the
following items:
UPTIME GUARANTEES
 What Service Level Agreements (SLAs) are the products required to meet?
 Can these SLAs be met? Do the different cloud services we are planning to use all conform to the levels required?
Remember that SLAs are composite.

REPLICATION AND FAILOVER


 Which parts of the application are most at risk from failure?
 In which parts of the system would a failure have the most impact?
 Which parts of the application could benefit from redundancy and failover options?
 Will data replication services be required?
 Are we restricted to specific geopolitical areas? If so, are all the services we are planning to use available in those
areas?
 How do we prevent corrupt data from being replicated?
 Will recovery from a failure put excess pressure on the system? Do we need to implement retry policies and/or a
circuit-breaker?
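The retry policy and circuit-breaker asked about above can be sketched in a few lines; the thresholds, delays and attempt counts here are illustrative choices, not recommendations:

```python
import time

class CircuitBreaker:
    """Opens after `threshold` consecutive failures; while open, calls fail
    fast instead of adding recovery pressure to a struggling service."""
    def __init__(self, threshold=3, reset_after=30.0):
        self.threshold = threshold
        self.reset_after = reset_after
        self.failures = 0
        self.opened_at = None

    def call(self, fn, *args, **kwargs):
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.reset_after:
                raise RuntimeError("circuit open: failing fast")
            self.opened_at = None        # half-open: allow one trial call
        try:
            result = fn(*args, **kwargs)
        except Exception:
            self.failures += 1
            if self.failures >= self.threshold:
                self.opened_at = time.monotonic()
            raise
        self.failures = 0                # success resets the failure count
        return result

def retry(fn, attempts=4, base_delay=0.01):
    """Retry with exponential backoff so recovery traffic ramps up gently."""
    for attempt in range(attempts):
        try:
            return fn()
        except Exception:
            if attempt == attempts - 1:
                raise
            time.sleep(base_delay * (2 ** attempt))
```

The two are usually combined: retries handle transient faults, while the breaker stops retries from turning an enduring fault into a self-inflicted overload.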

DISASTER RECOVERY
 In the event of a catastrophic failure, how do we rebuild the system?
 How much data, if any, is it acceptable to lose in a disaster recovery scenario?
 How are we handling backups? Do we have a need for backups in addition to data-replication?
 How do we handle “in-flight” messages and queues in the event of a failure?
 Are we idempotent? Can we replay messages?
 Where are we storing our VM images? Do we have a backup?
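Idempotency, as asked above, can be sketched as a handler that records the IDs of messages it has already processed, so replaying "in-flight" messages after a failure is harmless; the message shape and the in-memory store are illustrative (a real system would use a durable store):

```python
processed = set()   # in production this would live in a durable store

def handle_payment(message):
    """Idempotent handler: replaying the same message (e.g. after a queue
    recovers in-flight items) must not perform the side effect twice."""
    if message["id"] in processed:
        return "skipped"        # duplicate delivery, safe to ignore
    # ... perform the side effect exactly once here ...
    processed.add(message["id"])
    return "processed"

first = handle_payment({"id": "evt-1"})    # "processed"
replay = handle_payment({"id": "evt-1"})   # "skipped"
```

With handlers shaped like this, the disaster-recovery answer to "can we replay messages?" becomes a simple yes.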

PERFORMANCE
 What are the acceptable levels of performance? How can we measure that? What happens if we drop below this
level?
 Can we make any parts of the system asynchronous as an aid to performance?
 Which parts of the system are the most highly contended, and therefore more likely to cause performance issues?
 Are we likely to hit traffic spikes which may cause performance issues? Can we auto-scale or use queue-centric
design to cover for this?



SECURITY
This is clearly a huge topic in itself, but a few interesting items to explore which relate directly to cloud-computing
include:
 What is the local law and jurisdiction where data is held? Remember to include the countries where failover and
metrics data are held too.
 Is there a requirement for federated security (e.g. ADFS with Azure Active Directory)?
 Is this to be a hybrid-cloud application? How are we securing the link between our corporate and cloud networks?
 How do we control access to the administration portal of the cloud provider?
 How do we restrict access to databases, etc. from other services (e.g. IP Address white-lists, etc.)?
 How do we handle regular password changes?
 How does service-decoupling and multi-tenancy affect security?
 How will we deal with operating system and vendor security patches and updates?

MANAGEABILITY
This topic of conversation covers our ability to understand the health and performance of the live system and manage site
operations. Some useful cloud specific considerations include:

MONITORING
 How are we planning to monitor the application?
 Are we going to use off-the-shelf monitoring services or write our own?
 Where will the monitoring/metrics data be physically stored? Is this in line with data protection policies?
 How much data will our plans for monitoring produce?
 How will we access metrics data and logs? Do we have a plan to make this data useable as volumes increase?
 Is there a requirement for auditing as well as logging?
 Can we afford to lose some metrics/logging/audit data (i.e. can we use an asynchronous design to “fire and forget” to
help aid performance)?
 Will we need to alter the level of monitoring at runtime?
 Do we need automated exception reporting?

DEPLOYMENT
 How do we automate the deployment?
 How do we patch and/or redeploy without disrupting the live system? Can we still meet the SLAs?
 How do we check that a deployment was successful?
 How do we roll-back an unsuccessful deployment?
 How many environments will we need (e.g. development, test, staging, production) and how will we deploy to each of
them?
 Will each environment need separate data storage?
 Will each environment need to be available 24x7?

FEASIBILITY
When discussing feasibility we consider the ability to deliver and maintain the system, within budgetary and time
constraints. Items worth investigating include:
 Can the SLAs ever be met (i.e. is there a cloud service provider that can give the uptime guarantees that we need to
provide to our customer)?
 Do we have the necessary skills and experience in-house to design and build cloud applications?
 Can we build the application to the design we have within budgetary constraints and a timeframe that makes sense to
the business?
 How much will we need to spend on operational costs (cloud providers often have very complex pricing structures)?
 What can we sensibly reduce (scope, SLAs, resilience)?
 What trade-offs are we willing to accept?

Application design considerations


 Persistence.
 Model-view-controller pattern.
 Statelessness.
 Caching.
 Asynchronous considerations.
 Third-party libraries



What are the most important criteria for a cloud application architecture design?
 The system should be architected in such a way that continuous deployment is possible from testing to
staging and then from staging to production. The applications should be designed in a modular way so that
continuous deployment is possible.

PLANNING THE APPLICATION FOR THE CLOUD


Jane states, ''I see your point, so how do I plan my application for the cloud?''
Teresa explains, ''There are some concepts that help developers design for the cloud.
1. The composition of the cloud and more specifically virtualization and elasticity.
2. Loose coupling of components (as in the case of web services).
3. Fault tolerance and high availability.
4. Multitenancy.
5. Platform-agnosticism.
6. Performance enhancement.''

THE CLOUD COMPOSITION
''I know the cloud is based on virtualization technology, but do I have to understand how the technology works in detail?''
Jane asks.
Teresa responds, ''No, you don't have to. However, it is necessary to have a basic idea of virtualization technology
and rapid elasticity, since they enable auto-scaling and dynamic provisioning of resources, as the four directions in Figure-
2 demonstrate. Scaling out/in is about increasing/decreasing the number of resources, while scaling up/down is about
upgrading/downgrading the specifications of a resource.''

CLOUD REFERENCE ARCHITECTURE (CRA)

Before digging into the definition of the Cloud Reference Architecture (CRA) and its benefits, it is better to look at how
things can go wrong without having one. You will quickly realize that it is better to spend some time before migration to
plan your cloud migration journey with security and governance in mind. Doing that will not only save you time and
money but will help you meet your security and governance needs. So let’s get started.

When organizations start planning their cloud migration, like anything else new, they start by trying and testing some
capabilities. Perhaps they start by hosting their development environment in the cloud while keeping their production one
on-premises.

It is also common to see small and isolated applications being migrated first, perhaps because of their size, low criticality
and to give the cloud a chance to prove it is trustworthy. After all, migration to the cloud is a journey and doesn't happen
overnight.

Then the benefits of cloud solutions become apparent and companies start to migrate multiple large-scale workloads.
As more and more workloads move to the cloud, many organizations find themselves dealing with workload islands that
are managed separately, with different security models and independent data flows.



Even worse, with the pressure to quickly get new applications deployed in the cloud under strict deadlines, developers find
themselves rushing to consume new cloud services without reasonable consideration of the organization's security and
governance needs.
The unfortunate result in most cases is to end up with a cloud infrastructure that is hard to manage and maintain. Each
application could end up deployed in a separate island with its own connectivity infrastructure and with poor access
management.

Managing the cost of running workloads in the cloud also becomes a challenge. There is no clear governance and
accountability model, which leads to a lot of management overhead and security concerns.
Governance, automation, naming conventions and security models are also hard to retrofit. In
fact, it is a nightmare to look at a poorly managed cloud infrastructure and then try to apply security and governance
afterwards, because these need to be planned ahead, before even deploying any cloud resources.
Even worse, data can be hosted in geographies that violate corporate compliance requirements, which is a big concern
for most organizations. I remember asking my customers whether they knew where their cloud data was hosted, and
most of them simply didn't know.
THE BENEFITS OF CLOUD REFERENCE ARCHITECTURE (CRA)
Simply put, the Cloud Reference Architecture (CRA) helps organizations address the need for detailed, modular and
current architecture guidance for building solutions in the cloud.
The Cloud Reference Architecture (CRA) serves as a collection of design guidance and design patterns to support a
structured approach to deploying services and applications in the cloud. This means that every workload is deployed with
security, governance and compliance in mind from day one.



The ISO/IEC 17789 Cloud Computing Reference Architecture defines four different views for the Cloud Reference
Architecture (CRA):
 User View
 Functional View
 Implementation View
 Deployment View.
We will be focusing on the Deployment View of the Cloud Reference Architecture (CRA) for now.

The Cloud Reference Architecture (CRA) Deployment View provides a framework to be used for all cloud deployment
projects, which reduces the effort during design and provides upfront guidance for a deployment aligned to
architecture, security and compliance.
You can think of the Cloud Reference Architecture (CRA) Deployment View as the blueprint for all cloud projects.
What you get from this blueprint, the end goal if you are wondering, is to help you quickly develop and implement cloud-
based solutions, while reducing complexity and risk.
Therefore, having a foundation architecture not only helps you ensure security, manageability and compliance but also
consistency for deploying resources. It includes network, security, management infrastructure, naming convention,
hybrid connectivity and more.
I know what you might be thinking right now: how does one blueprint fit the needs of organizations of different sizes?
Since not all organizations are the same, the Cloud Reference Architecture (CRA) Deployment View does not outline a
single design that fits all sizes. Rather, it provides a framework for decisions based on core cloud services, features and
capabilities.



Cloud Application Design Methodologies:

The Twelve-Factor App is a design methodology introduced to manage cloud-based or Software as a Service (SaaS) apps. Some
of the key features or applications of this design methodology are as follows:
 Use declarative formats for setup automation, to minimize time and cost for new developers joining the project
 Have a clean contract with the underlying operating system, offering maximum portability between execution
environments
 Are suitable for deployment on modern cloud platforms (Google Cloud, Heroku, AWS, etc.), obviating the need
for servers and systems administration
 Minimize divergence between development and production, enabling continuous deployment for maximum
agility and can scale up without significant changes to tooling, architecture, or development practices

If we simplify this term further, the 12 Factor App design methodology is nothing but a collection of 12 factors which act as
building blocks for deploying or developing an app in the cloud.
Listed below are the 12 Factors:

1. Codebase: A 12 Factor App is always tracked in a version control system such as Git or Apache Subversion
(SVN) in the form of a code repository. This essentially helps you build your code on top of one codebase,
fully backed up, with many deploys and revision control. As there is a one-to-one relationship between a 12
factor app and its codebase repository, in the case of multiple repositories you need to treat the system as a
distributed system consisting of multiple 12 factor apps. Deployments should be automated, so that the same
codebase can run in different environments.

2. Dependencies: As the app is standalone and needs to install dependencies, it is important to explicitly declare
and isolate them. Moreover, it is always recommended to keep your development, production and QA
environments identical in 12 Factor apps. This will help you build web-scale applications that have no room
for error. To achieve this, you can use a dependency isolation tool, such as virtualenv for Python, uniformly
across development and production so that the same explicit dependency specifications apply to both
phases and environments.
3. Config: This factor manages the configuration information for the app. Here you store your configuration
in the environment rather than in the code. This factor focuses on settings that vary between deploys: the
database Uniform Resource Identifier (URI), for example, will be different in development, QA and production.
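The idea of storing config in the environment can be sketched in a few lines of Python; the variable names and the sqlite default below are illustrative choices for the sketch, not a standard:

```python
import os

# Twelve-factor config: deploy-varying settings (like the database URI)
# are read from environment variables, never hard-coded in the app.
# DATABASE_URI and DEBUG are hypothetical names for this example.
DATABASE_URI = os.environ.get("DATABASE_URI", "sqlite:///dev.db")
DEBUG = os.environ.get("DEBUG", "false").lower() == "true"
```

The same build then runs unchanged in development, QA and production, with only the environment differing between deploys.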



4. Backing Services: A backing service is any service the app consumes over a network connection (a local
database service or any third-party service). In the case of a 12 factor app, the interface used to connect to these
services should be defined in a standard way. You need to treat backing services like attached resources, because
you may want different databases depending on which team you are working with. Sometimes developers will
want a lot of logs, while QA will want fewer. With this method, even each developer can have their own config
file.

5. Build, Release, Run: It is important to keep the build and run stages strictly separate, making sure everything has
the right libraries. For this, you can make use of automation and tools to generate build and release
packages with proper tags. This is further backed up by running the app in the execution environment while
using proper release management tools, such as Capistrano, to ensure timely rollback.
6. Stateless Processes: This factor is about executing the app in the execution environment as one or
more stateless processes. In other words, you want to make sure that all persistent data is stored in a backing
store, which gives you the ability to scale out and do what you need to do. With stateless processes, you do not
want to have state that you need to pass along as you scale up and out.
7. Port Binding: Twelve factor apps are self-contained and do not rely on runtime injection of a web server into
the execution environment to create a web-facing service. With the help of port binding, you can directly access
your app via a port to know if it’s your app or any other point in the stack that is not working properly.
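Port binding can be sketched using only Python's standard library: the app exports HTTP itself on a port taken from the environment, rather than relying on an injected external web server. Using 0 as the default lets the OS pick a free port for the sketch:

```python
import os
from http.server import BaseHTTPRequestHandler, HTTPServer

class Hello(BaseHTTPRequestHandler):
    # Minimal handler: the app speaks HTTP itself, with no web server
    # injected into the execution environment.
    def do_GET(self):
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"hello from a self-contained app\n")

    def log_message(self, *args):
        pass  # keep the demo quiet

# Port binding: the listening port comes from the environment (PORT);
# the default "0" makes the OS pick a free port for this sketch.
port = int(os.environ.get("PORT", "0"))
server = HTTPServer(("127.0.0.1", port), Hello)
bound_port = server.server_address[1]
# server.serve_forever()  # in a real app this call would block here
```

Because the app binds its own port, hitting that port directly tells you immediately whether the app itself, or some other layer of the stack, is the thing that is misbehaving.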

8. Concurrency: This factor looks into best practices for scaling the app. These practices are used to manage
each process in the app independently (i.e. start/stop, clone to different machines, etc.). The factor also deals with
breaking your app into much smaller pieces and then looking for services out there that you either have to write or
can consume.
9. Disposability: Your app might have multiple processes handling different tasks. So, the ninth factor
looks into the robustness of the app through fast startup and shutdown. Disposability is about making sure
your app can start up and shut down fast and can handle a crash at any time. You can use a high-quality,
robust queuing backend (Beanstalkd, RabbitMQ, etc.) that returns unfinished jobs to the queue in
the case of a failure.
10. Dev/Prod Parity: Development, staging and production should be as similar as possible. In the case of continuous
deployment, you need to have continuous integration based on matching environments to limit deviation and
errors.



Some ways of keeping the gap between development and production small are as follows:
1. Make the time gap small: a developer may write code and have it deployed hours or even just minutes
later.
2. Make the personnel gap small: developers who wrote code are closely involved in deploying it and
watching its behavior in production.
3. Make the tools gap small: keep development and production as similar as possible.
11. Logs: Logging mechanisms are critical for debugging. Proper logging allows you to output
log information as a continuous stream rather than managing a database of log files. Then, depending on the
configuration, you can decide where that log stream is published.
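Treating logs as an event stream can be sketched as follows: the app writes one event per line to stdout and leaves routing and storage to the execution environment (the "checkout" logger name is illustrative):

```python
import logging
import sys

# Twelve-factor logs: the app emits its event stream, unbuffered, to
# stdout; the environment (not the app) routes and aggregates it.
# force=True replaces any handlers already configured, for the demo.
logging.basicConfig(
    stream=sys.stdout,
    level=logging.INFO,
    format="%(asctime)s %(levelname)s %(name)s %(message)s",
    force=True,
)
log = logging.getLogger("checkout")   # "checkout" is a hypothetical name
log.info("order accepted id=%s", 42)
```

Because the app never opens a log file itself, the same code works whether the stream ends up in a terminal, a file, or a hosted log-indexing service.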
12. Admin Processes: One-off admin processes help in collecting data from the running application. In order to
avoid any synchronization issues, you need to ensure that all these processes are a part of all deploys.

Data Storage Approaches:

DATA STORAGE DEVICES


Storage devices can be broadly classified into three categories:
 Block Storage Devices
 File Storage Devices
 Object Storage Devices

BLOCK STORAGE DEVICES


Block storage devices offer raw storage to clients. This raw storage is partitioned to create volumes.

FILE STORAGE DEVICES


File storage devices offer storage to clients in the form of files, maintaining their own file system. This storage is in
the form of Network Attached Storage (NAS).

CLOUD STORAGE CLASSES


Cloud storage can be broadly classified into two categories:
 Unmanaged Cloud Storage
 Managed Cloud Storage

UNMANAGED CLOUD STORAGE


Unmanaged cloud storage means the storage is preconfigured for the customer. The customer can neither format the
storage nor install their own file system or change drive properties.

MANAGED CLOUD STORAGE


Managed cloud storage offers online storage space on-demand. The managed cloud storage system appears to the user to
be a raw disk that the user can partition and format.
CREATING CLOUD STORAGE SYSTEM

The cloud storage system stores multiple copies of data on multiple servers, at multiple locations. If one system fails,
then only the pointer to the location where the object is stored needs to be changed.
To aggregate storage assets into cloud storage systems, the cloud provider can use storage virtualization software
such as StorageGRID. It creates a virtualization layer that pools storage from different storage devices into a single
management system. It can also manage data from CIFS and NFS file systems over the Internet. The following diagram
shows how StorageGRID virtualizes storage into storage clouds:



VIRTUAL STORAGE CONTAINERS
Virtual storage containers offer high-performance cloud storage systems. The Logical Unit Number (LUN) of a device,
files and other objects are created in virtual storage containers. The following diagram shows a virtual storage container
defining a cloud storage domain:

CHALLENGES
Storing data in the cloud is not a simple task. Despite its flexibility and convenience, it presents several challenges
for customers. The customers must be able to:
 Get provision for additional storage on-demand.
 Know and restrict the physical location of the stored data.
 Verify how data was erased.
 Have access to a documented process for disposing of data storage hardware.
 Have administrator access control over data.

Storage Systems in the Cloud :


There are 3 types of storage systems in the Cloud as follows.
 Block-Based Storage System
 File-Based Storage System
 Object-Based Storage System

Type-1 :
Block-Based Storage System –
 Hard drives are block-based storage systems. Your operating system, like Windows or Linux, sees a hard
disk drive as a drive on which you can create a volume, and then you can partition that volume and
format it.
 For example, if a system has 1000 GB of volume, then we can partition it into 800 GB and 200 GB for the local C
and local D drives respectively.
 Remember, with a block-based storage system, your computer sees a drive, on which you can create
volumes and partitions.
Type-2 :
File-Based Storage System –
 In this, you are connecting through a Network Interface Card (NIC). You go over a network to
access a network-attached storage (NAS) server. NAS devices are file-based storage systems.



 This storage server is another computing device that has its own disks. It has already created a file system and
formatted its partitions, and it shares its file systems over the network. Here, you can
map a drive to its network location.
 Unlike the previous type, there is no need for the user to partition and format the volume; it is already done
in file-based storage systems. So, the operating system sees a file system that is mapped to a local drive letter.
Type-3 :
Object-Based Storage System –
 In this, a user uploads objects using a web browser, placing each object into a container, i.e., an Object Storage
Container. This uses the HTTP protocol with REST APIs (for example: GET, PUT, POST,
DELETE).
 For example, when you connect to any website and need to download images, text, or anything else
the website contains, the browser issues an HTTP GET request. If you want to submit something, such as a
product review, you can use PUT and POST requests.
 Also, there is no hierarchy of objects in the container. Every file is on the same level in an object-based storage
system.
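The flat namespace and HTTP-verb semantics described above can be illustrated with a toy in-memory container; this is a sketch of the model, not a real object-storage client, and the keys shown are made up for the example:

```python
class ObjectContainer:
    """Toy model of an object-storage container: a flat mapping from key
    to bytes. Keys like 'photos/2024/cat.jpg' look hierarchical, but the
    slash is just part of the name; there are no real directories."""
    def __init__(self):
        self._objects = {}

    def put(self, key, data):     # analogous to HTTP PUT: create/replace
        self._objects[key] = bytes(data)

    def get(self, key):           # analogous to HTTP GET: fetch by key
        return self._objects[key]

    def delete(self, key):        # analogous to HTTP DELETE: remove
        del self._objects[key]

    def list_keys(self):
        # Every key sits at the same (flat) level of the container.
        return sorted(self._objects)

c = ObjectContainer()
c.put("photos/2024/cat.jpg", b"\x89PNG...")
c.put("readme.txt", b"hello")
print(c.list_keys())
```

A real service exposes the same operations over HTTP against container URLs, but the flat key-to-object model is the part worth internalizing.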
Advantages:
 Scalability – Capacity and storage can be expanded, and performance can be enhanced.
 Flexibility – Data can be manipulated and scaled according to the rules.
 Simpler Data Migrations – New and old data can be added and removed as required, eliminating disruptive data
migrations.
Disadvantages:
 Data centers require electricity and a reliable internet connection to operate; if either fails, the system will not
work properly.

Multimedia Introduction:
Multimedia:
Multimedia is usually recorded and played, displayed, or accessed by information content processing devices, such
as computerized and electronic devices, but can also be part of a live performance. Multimedia devices are electronic
media devices used to store and experience multimedia content.

Streaming Media:
Streaming media is multimedia that is constantly received by and presented to an end-user while being delivered by a
provider.
Its verb form, "to stream", refers to the process of delivering media in this manner; the term refers to the
delivery method of the medium rather than the medium itself .

Types of streaming media include live streaming and video streaming.

Live streaming refers to content delivered live over the Internet; it requires a camera for the media, an encoder to digitize
the content, a media publisher, and a content delivery network to distribute and deliver the content. In streaming video
and audio, the traveling information is a stream of data from a server.

The decoder is a stand-alone player or a plugin that works as part of a Web browser. The server, information stream
and decoder work together to let people watch live or prerecorded broadcasts.

Most streaming videos don't fill the whole screen on a computer. Instead, they play in a smaller frame or window.

If you stretch many streaming videos to fill your screen, you'll see a drop in quality. For this reason, streaming video and
audio use protocols that allow the transfer of data in real time.

They break files into very small pieces and send them to a specific location in a specific order. These protocols include:
1. Real-time Transport Protocol (RTP)
2. Real-Time Streaming Protocol (RTSP)
3. Real-time Transport Control Protocol (RTCP)

Users expect powerful and stable functions for multimedia video, so stability is of the greatest importance, and
appropriate multimedia files must be provided for diversified terminal units.

In the transcoding mode, multimedia files are transcoded dynamically in order to suit the device side
according to the terminal environment.
This mode needs to consider the real-time problem, especially for H.264/SVC coding, as the time required for
transcoding causes difficulties for real-time streaming. Although SVC is applicable to varying-bandwidth networks due to
its multilayer architecture, how to provide a multimedia hierarchy that suits dynamic environment variations
according to the terminal unit is an interesting research problem.

Basic Modules of the Video Transcoding System

The transmission profile is responsible for monitoring the dynamic condition of the transmission channel, such as the
effective channel bandwidth, channel error rate, etc.
The device profile describes the capability of the device, such as screen size, processing power, etc.

The user profile describes the user's preferences.

Here the SVC transcoding controller transcodes the appropriate video file according to
the mobile device's parameters.
An approach based on cloud computing introduces the new concept of using MapReduce to separate the video content into different
clips.

The losses in bandwidth can be reduced by this approach. SVC plays an important role in this method and
also provides the different formats of the video files within the cloud environment to simulate the overall network
environment.
In this scheme, when a user wants to download a particular file from the cloud server, the user should first register with the cloud
server; if the device profile already exists, the user directly gets the appropriate video file according
to the parameters of the mobile device.

For any mobile device using this service for the first time, the cloud will not have such a profile, so there should be an
additional profile examination to provide the necessary information about the mobile device.

SUBJECT NAME: CLOUD FACULTY NAME: Dr SUJEETH .T 12


Through this functionality, the mobile device can generate a schema and send it to the profile agent.
The profile agent determines the required parameters and then sends them to the NDAMM for identification.
It determines the user profile of the mobile device and then sends it to the SVC transcoding controller, which increases
the efficiency of the video streaming according to the parameters of the mobile device.
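The profile flow described above can be sketched in Python. This is an illustrative sketch only: the profile store, parameter names, and rendition table below are assumptions for the example, not the system's actual implementation.

```python
# Hypothetical sketch of the device-profile flow: a registered device gets
# a rendition matching its stored profile; an unknown device first goes
# through a profile-examination step.

RENDITIONS = {                      # illustrative rendition table
    "small":  {"max_width": 480,  "bitrate_kbps": 400},
    "medium": {"max_width": 720,  "bitrate_kbps": 1200},
    "large":  {"max_width": 1080, "bitrate_kbps": 3000},
}

device_profiles = {}                # stands in for the cloud profile store


def examine_profile(device_id, reported_width):
    """Profile examination for first-time devices (assumed schema)."""
    device_profiles[device_id] = {"screen_width": reported_width}
    return device_profiles[device_id]


def select_rendition(device_id, reported_width):
    profile = device_profiles.get(device_id)
    if profile is None:             # first use: no profile on the cloud yet
        profile = examine_profile(device_id, reported_width)
    width = profile["screen_width"]
    for name, spec in RENDITIONS.items():
        if width <= spec["max_width"]:
            return name
    return "large"


print(select_rendition("phone-1", 720))   # first visit triggers examination
print(select_rendition("phone-1", 720))   # second visit hits the stored profile
```

Both calls return the same rendition, but only the first one runs the examination step, mirroring the register-then-reuse flow in the text.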

Case Study: Performance Evaluation


This case study describes how to provide different renditions of a video file, requested by a mobile device, over
varying-bandwidth networks. Multiple renditions of a single source file are maintained on the cloud server, and the
appropriate video file is sent according to the network and device.

The proposed system has the following modules:


 Parameter Calculation
 Cloud Service
 Bandwidth settings
 Transcoding
 Adaptive Streaming

Structure of Cloud-based Mobile Streaming

A. Parameter Calculation

For parameter calculation, the mobile device information is sent in the form of an XML manifest file, in which the
network parameters are set according to the device.

Three bandwidth values, namely the existing, average, and standard deviation values, are used to calculate the current
bandwidth. Once this parameter form is maintained, the device parameters are sent to the network estimation module
and the device-aware Bayesian prediction module for the relevant prediction. The mobile network parameters set in the
XML manifest file are sent to the cloud server, which detects them and sends the appropriate video file to the mobile
device according to its parameters.
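The three bandwidth values mentioned above can be computed from a window of recent throughput samples. The following is an illustrative sketch, not the paper's exact formulas:

```python
import statistics

def bandwidth_parameters(samples_kbps):
    """Summarize recent throughput samples (kbps) into the three values
    sent to the prediction modules: existing (latest), average, and
    standard deviation. Illustrative only."""
    existing = samples_kbps[-1]                # most recent measurement
    average = statistics.fmean(samples_kbps)   # mean over the window
    std_dev = statistics.pstdev(samples_kbps)  # population std deviation
    return {"existing": existing, "average": average, "std_dev": std_dev}

params = bandwidth_parameters([900, 1100, 1000, 1000])
print(params)
```

A payload like this is what the XML manifest would carry to the cloud server alongside the device parameters.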



Fig. 3. Android XML Manifest File

B. Cloud Service
In this module, different video formats of a single video file are maintained and stored in the cloud database.
Whenever the mobile device sends its parameters to the cloud server, the server detects the user profile of that
particular mobile device and sends the required video file to it. The service uses highly scalable, reliable, secure,
fast, and inexpensive infrastructure, and it mainly aims to maximize the benefit to the user. Video files are stored
according to bitrate, resolution, frame rate, bandwidth, width, height, standard deviation, and decoding and
encoding formats.

C. Bandwidth Settings
In this module, the video processing is estimated according to frame rate, bit rate, and resolution, so that the video
file can be decoded or encoded according to the parameters of the mobile device. Device features can be determined
from power consumption, device model, and device network in order to conform to the real-time requirements of the
mobile device. To conform to the real-time requirements of mobile multimedia, this study adopted Bayesian theory to
infer whether the video features conform to the decoding action. The inference module is based on two conditions, the
first being that the LCD brightness does not always change; this hypothesis aims at a hardware energy evaluation. The
literature states that TFT LCD energy consumption accounts for about 20%-45% of the total power consumption across
different terminal hardware environments. Although the overall power can be reduced effectively by adjusting the LCD,
with multimedia services users are sensitive to brightness; they dislike video brightness that changes repeatedly.
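As a rough illustration of the Bayesian inference step, a two-hypothesis update can be written as below. The study's actual features, likelihoods, and priors are not given here, so the numbers are made up:

```python
def posterior_decode_ok(prior_ok, likelihood_ok, likelihood_not_ok):
    """Two-hypothesis Bayes update: P(decode_ok | feature).

    prior_ok            P(decode_ok) before seeing the feature
    likelihood_ok       P(feature | decode_ok)
    likelihood_not_ok   P(feature | not decode_ok)
    """
    evidence = likelihood_ok * prior_ok + likelihood_not_ok * (1 - prior_ok)
    return likelihood_ok * prior_ok / evidence

# Made-up example: a stable-brightness feature is much more likely when
# the chosen decoding action suits the device, so observing it raises
# the posterior from 0.5 to 0.8.
p = posterior_decode_ok(prior_ok=0.5, likelihood_ok=0.8, likelihood_not_ok=0.2)
print(round(p, 2))
```

The inference module would combine several such feature updates before committing to a decoding action.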

D. Transcoding
In this module, the transcoding work is done efficiently by the cloud server according to the bandwidth and network
environment. Scalable Video Coding (SVC) is an improvement over traditional H.264/MPEG-4 AVC coding, as it has higher
coding flexibility. It is characterized by temporal scalability, spatial scalability, and SNR scalability, allowing
video transmissions to be more adaptable to heterogeneous network bandwidth. In this service, transcoding is done
instantly by the cloud server, which makes the method very convenient for users.

E. Adaptive Streaming

Adaptive Video Streaming

A good dynamic communication mechanism can reduce the bandwidth needs and the device power consumption that result
from excessive packet transmission. Transmission frequency can be determined according to the bandwidth through
dynamic decision making: when the network bandwidth difference exceeds a triple standard deviation, the present
network is unstable, and the overall communication frequency should shift higher to avoid errors; when the network
bandwidth difference is less than a triple standard deviation, the current network is still in a stable state, and the
influence of the bandwidth difference can be corrected gradually. Ultimately, this method delivers the service very
effectively.
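The triple-standard-deviation rule above can be sketched as follows; the window size and sample values are assumptions for the example:

```python
import statistics

def network_is_stable(samples_kbps, latest_kbps):
    """Apply the triple-standard-deviation rule from the text: if the
    latest bandwidth differs from the window average by more than three
    standard deviations, treat the network as unstable."""
    avg = statistics.fmean(samples_kbps)
    sigma = statistics.pstdev(samples_kbps)
    return abs(latest_kbps - avg) <= 3 * sigma

history = [1000, 1020, 980, 1000]
print(network_is_stable(history, 1010))  # small drift: stable
print(network_is_stable(history, 400))   # large drop: unstable
```

An unstable verdict would raise the communication frequency, while a stable one lets the sender correct the bandwidth difference gradually.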
Case Study: Live Video Streaming App

A proof of concept (POC) is an early product version between the design and main development phases. POC has
become a common way for startups and established businesses to test whether the idea they have is actually going to
work since POC demonstrates that the project can be done. A proof of concept also creates a starting point for the
development of the project as a whole.
Businesses need a proof of concept when there is no guarantee that the technical result is achievable due to the use
of complex architecture and new technologies, which is exactly the case for video streaming mobile applications. Thus,
by developing a POC, developers and stakeholders get evidence that the project is viable.
OUR CHALLENGE
Develop the proof of concept of a video streaming application with the basic functionality of a social media application.
To achieve this goal, we needed to: 
Implement the following mobile screens and use cases: 
 Sign up / Sign in. Users can sign up/sign in to the system.
 View profile. Users can view their and other users’ profile data. 
 Edit profile. Users can edit their profile data, such as name, avatar, and bio. 
 Search. Users can search for other users by name and follow them. 
 Start streaming.  Users can start real-time video streaming.
 View streamings list. Users can view the list of active streams. 
 Join the stream. Users can participate in the streaming of another user as a viewer. 
Integrate several authorization methods, such as: 

 Email and Password

 Google authorization

 Facebook authorization



 Apple authorization
OUR SOLUTION - VIDEO STREAMING APP PROOF OF CONCEPT  
We developed a proof of concept of a video streaming application with the basic functionality of a social media app to
show off our tech expertise in live broadcasting and demonstrate how such a project may look. 
Implemented features:

 Sign-in/Sign-up via email and password, Facebook, Google, and Apple ID.  

 User Profile 



 Search for followers, follow and unfollow functionality 

 View the list of active video streams 



 Broadcasting videos to subscribers and receiving reactions 

HIGH-LEVEL ARCHITECTURE VISION  



TECH STACK
 Swift for iOS application
 Firebase Realtime Database, which supports direct connectivity from mobile and web platforms as well as backend
applications
 Firebase for user authentication and authorization, data and image storing
 Google Cloud Platform for hosting app’s back-end 
 Python for application’s back-end 
 Agora.IO, a SaaS for video broadcasting and participating in video streaming

CONTRIBUTORS
 iOS developer Vitalii Pryidun 
 Backend developer Ihor Kopaniev 
 DevOps support Vasily Lebediev

HOW WE DEVELOPED A STREAMING APP PROOF OF CONCEPT


CORE
We built the app's POC using an MVP+Router+Configurator architecture, including MVVM+Combine for lists, etc.
We implemented dependency injection using a ServiceLocator singleton, which is a factory of abstract services.
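The POC itself is written in Swift, but the service-locator idea can be sketched in Python. The names and registration API below are hypothetical, not the project's actual code:

```python
class ServiceLocator:
    """Minimal service-locator singleton: services are registered once
    under an abstract name and resolved lazily wherever needed."""
    _instance = None

    def __new__(cls):
        if cls._instance is None:
            cls._instance = super().__new__(cls)
            cls._instance._factories = {}
        return cls._instance

    def register(self, name, factory):
        self._factories[name] = factory

    def resolve(self, name):
        return self._factories[name]()   # build the concrete service


# Usage: a hypothetical auth service resolved through the locator.
locator = ServiceLocator()
locator.register("auth", lambda: {"kind": "FirebaseAuth"})
auth = ServiceLocator().resolve("auth")  # any caller gets the same singleton
print(auth["kind"])
```

Registering factories rather than instances lets each caller decide when a service is actually constructed, which is the trade-off this pattern makes against constructor injection.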

Main services 
 Keychain for saving JWT and Apple sign-in credentials.
 Network, AuthorizedNetwork, TokenProvider, APIErrorParser for executing network requests. All requests have
to conform to APIRequestProtocol or APIAuthorizedRequestProtocol for requests that include a token
into headers. 
 TokenProvider for fetching a token from the keychain and refreshing it via Firebase if needed. If your app has to
refresh a token using a backend request, go to Core/Networking/TokenProvider and rewrite this service to
restore the token manually. 
 FirebaseManager for authentication using email+password, verification ID, social media, password reset, logout,
etc.
 FirebaseDatabaseManager for obtaining followers list, fetching users, etc.
 FirebaseStorage for setting and fetching an avatar.
 AuthService is just a stub for validating Firebase JWT tokens. If your back-end requires JWT verification, insert
a validation request into the validate method.
 SearchService for fetching users with input from the search field.
 FollowService for following/unfollowing the user fetched with SearchService.
 UserService for updating user profile (name etc.).
 StreamService for fetching a token to join agora channel, notifying back-end about start/end of the channel,
subscribing to user reactions, sending reactions, etc.

OUR RESULTS 
The development of the video streaming app proof of concept gave us the following expertise:
 We integrated video streaming functionality to the POC using Agora.IO SaaS.
 We implemented the authentication and authorization by Firebase Authentication.
 We worked with Firebase Realtime Database, which supports direct connectivity with end-user applications
(mobile, web, etc.) as well as server-side applications.
 We optimized the development process by applying ready-to-use Firebase functionality.
 As a result, we showcased our expertise in video streaming app development. 

Case Study: Streaming Protocols



WHAT IS A PROTOCOL?
A protocol is a set of rules governing how data travels from one communicating system to another. These are layered on
top of one another to form a protocol stack. That way, protocols at each layer can focus on a specific function and
cooperate with each other. The lowest layer acts as a foundation, and each layer above it adds complexity.
You’ve likely heard of an IP address, which stands for Internet Protocol. This protocol structures how devices using the
internet communicate. The Internet Protocol sits at the network layer. It’s typically overlaid by the Transmission Control
Protocol (TCP) at the transport layer, as well as the Hypertext Transfer Protocol (HTTP) at the application layer.

The seven layers — no, rather: The seven layers, which include physical, data link, network, transport, session, presentation, and application, were
defined by the International Organization for Standardization's (ISO's) Open Systems Interconnection (OSI) model.
 
WHAT IS A STREAMING PROTOCOL?
Each time you watch a live stream or video on demand, video streaming protocols are used to deliver data over the
internet. These can sit in the application, presentation, and session layers.
Online video delivery uses both streaming protocols and HTTP-based protocols. Streaming protocols like Real-Time
Messaging Protocol (RTMP) transport video using dedicated streaming servers, whereas HTTP-based protocols rely on
regular web servers to optimize the viewing experience and quickly scale. Finally, emerging HTTP-based
technologies like Apple’s Low-Latency HLS seek to deliver the best of both options by supporting low-latency streaming
at scale.

UDP VS. TCP: A QUICK BACKGROUND


User Datagram Protocol (UDP) and Transmission Control Protocol (TCP) are both core components of the internet
protocol suite, residing in the transport layer. The protocols used for streaming sit on top of these. UDP and TCP differ in
terms of quality and speed, so it’s worth taking a closer look.



Adapted from https://microchipdeveloper.com/tcpip:tcp-vs-udp
The primary difference between UDP and TCP hinges on the fact that TCP requires a three-way handshake when
transporting data. The initiator (client) asks the accepter (server) to start a connection, the accepter responds, and the
initiator acknowledges the response and maintains a session between either end. For this reason, TCP is quite reliable and
can solve for packet loss and ordering. UDP, on the other hand, starts without requiring any handshake. It transports data
regardless of any bandwidth constraints, making it speedier but riskier. Because UDP doesn't support retransmissions,
packet ordering, or error-checking, there’s potential for a network glitch to corrupt the data en route.
Protocols like Secure Reliable Transport (SRT) often use UDP, whereas protocols like HTTP Live Streaming (HLS) use
TCP.
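The handshake difference is visible directly in Python's socket API. This loopback sketch (ports and payloads are arbitrary) echoes one message over TCP, where `connect()` performs the handshake, and fires one datagram over UDP, where no connection exists at all:

```python
import socket
import threading

HOST = "127.0.0.1"

# --- TCP: connection-oriented; the three-way handshake runs inside connect() ---
tcp_srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
tcp_srv.bind((HOST, 0))
tcp_srv.listen(1)
tcp_port = tcp_srv.getsockname()[1]

def tcp_echo():
    conn, _ = tcp_srv.accept()      # handshake completes here
    conn.sendall(conn.recv(1024))   # reliable, ordered byte stream
    conn.close()

t = threading.Thread(target=tcp_echo)
t.start()

tcp_cli = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
tcp_cli.connect((HOST, tcp_port))   # blocks until the handshake finishes
tcp_cli.sendall(b"frame-1")
tcp_reply = tcp_cli.recv(1024)
tcp_cli.close()
t.join()
tcp_srv.close()

# --- UDP: connectionless; datagrams are fired off with no handshake ---
udp_srv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
udp_srv.bind((HOST, 0))
udp_port = udp_srv.getsockname()[1]

udp_cli = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
udp_cli.sendto(b"frame-1", (HOST, udp_port))  # no connect() needed
udp_reply, _ = udp_srv.recvfrom(1024)         # could be lost on a real network
udp_cli.close()
udp_srv.close()

print(tcp_reply, udp_reply)
```

On the loopback interface both messages arrive; on a real network the UDP datagram could be lost or reordered without notice, which is exactly the speed-for-reliability trade-off that streaming protocols build on.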

UDP vs. TCP Deep Dive


  
WHAT ARE THE MOST COMMON PROTOCOLS FOR VIDEO STREAMING?
VIDEO STREAMING PROTOCOLS COMPARISON IN 2022

 Real-Time Messaging Protocol (RTMP)


 Real-Time Streaming Protocol (RTSP)
 HTTP Live Streaming (HLS)
 Low-Latency HLS
 Dynamic Adaptive Streaming over HTTP (MPEG-DASH)
 Low-Latency CMAF for DASH
 Microsoft Smooth Streaming
 Adobe HDS (HTTP Dynamic Streaming)
 SRT (Secure Reliable Transport)
 WebRTC (Web Real-Time Communications)

 
TRADITIONAL VIDEO STREAMING PROTOCOLS
Traditional streaming protocols, such as RTSP and RTMP, support low-latency streaming. But they aren’t natively
supported on most endpoints (e.g., browsers, mobile devices, computers, and televisions). Today, these streaming formats
work best for transporting video between an IP camera or encoder and a dedicated media server.



   
As shown above, RTMP delivers video at roughly the same pace as a cable broadcast — in just over five seconds.
RTSP/RTP is even quicker at around two seconds. Both formats achieve such speed by transmitting the data using a
firehose approach rather than requiring local download or caching. But because very few players support RTMP and
RTSP, they aren’t optimized for great viewing experiences at scale. Many broadcasters choose to transport live streams to
the media server using a stateful protocol like RTMP. From there, they can transcode it into an HTTP-based technology
for multi-device delivery.
 
ADOBE RTMP
Adobe designed the RTMP specification at the dawn of streaming. The protocol could transport audio and video data
between a dedicated streaming server and the Adobe Flash Player. Reliable and efficient, this worked great for live
streaming. But open standards and adaptive bitrate streaming eventually edged RTMP out. The writing on the wall came
when Adobe announced the death of Flash — which officially ended in 2020.
While Flash’s end-of-life date was overdue, the same cannot be said for using RTMP for video contribution. RTMP
encoders are still a go-to for many content producers, even though the proprietary protocol has fallen out of favor for last-
mile delivery.
In fact, in our 2021 Video Streaming Latency Report, more than 76% of content distributors indicated they use
RTMP for ingest.
Which streaming formats are you currently using for ingest?



 Video Codecs: H.264, VP8, VP6, Sorenson Spark®, Screen Video v1 & v2
 Audio Codecs: AAC, AAC-LC, HE-AAC+ v1 & v2, MP3, Speex, Opus, Vorbis
 Playback Compatibility: Not widely supported (Flash Player, Adobe AIR, RTMP-compatible players)
 Benefits: Low-latency and requires no buffering
 Drawbacks: Not optimized for quality of experience or scalability
 Latency: 5 seconds
 Variant Formats: RTMPT (tunneled through HTTP), RTMPE (encrypted), RTMPTE (tunneled and encrypted),
RTMPS (encrypted over SSL), RTMFP (travels over UDP instead of TCP)

RTMP Explained
 
RTSP/RTP
Like RTMP, RTSP/RTP describes an old-school technology used for video contribution. RTSP and RTP are often used
interchangeably. But to be clear: RTSP is a presentation-layer protocol that lets end users command media servers via
pause and play capabilities, whereas RTP is the transport protocol used to move said data.
Android and iOS devices don’t have RTSP-compatible players out of the box, making this another protocol that’s rarely
used for playback. That said, RTSP remains standard in many surveillance and closed-circuit television (CCTV)
architectures. Why? The reason is simple. RTSP support is still ubiquitous in IP cameras.

   

 Video Codecs: H.265 (preview), H.264, VP9, VP8


 Audio Codecs: AAC, AAC-LC, HE-AAC+ v1 & v2, MP3, Speex, Opus, Vorbis
 Playback Compatibility: Not widely supported (Quicktime Player and other RTSP/RTP-compliant players,
VideoLAN VLC media player, 3Gpp-compatible mobile devices)
 Benefits: Low-latency and supported by most IP cameras
 Drawbacks: No longer used for video delivery to end users
 Latency: 2 seconds
 Variant Formats: The entire stack of RTP, RTCP (Real-Time Control Protocol), and RTSP is often referred to
as RTSP

RTSP Explained
 
ADAPTIVE HTTP-BASED STREAMING PROTOCOLS
Streams deployed over HTTP are not technically “streams.” Rather, they’re progressive downloads sent via regular web
servers. Using adaptive bitrate streaming, HTTP-based protocols deliver the best video quality and viewer experience
possible — no matter the connection, software, or device. Some of the most common HTTP-based protocols include
MPEG-DASH and Apple’s HLS.
 
APPLE HLS
Since Apple is a major player in the world of internet-connected devices, it follows that Apple’s HLS protocol rules the
digital video landscape. For one, the protocol supports adaptive bitrate streaming, which is key to viewer experience.
More importantly, a stream delivered via HLS will play back on the majority of devices — thereby ensuring accessibility
to a large audience.
HLS support was initially limited to iOS devices such as iPhones and iPads, but native support has since been added to a
wide range of platforms. All Google Chrome browsers, as well as Android, Linux, Microsoft, and MacOS devices, can
play streams delivered using HLS.
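HLS's adaptive bitrate mechanism is driven by a master playlist that lists the available renditions. A minimal, hypothetical example (paths and bitrates invented for illustration) looks like this:

```
#EXTM3U
#EXT-X-STREAM-INF:BANDWIDTH=800000,RESOLUTION=640x360
low/index.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=2400000,RESOLUTION=1280x720
mid/index.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=6000000,RESOLUTION=1920x1080
high/index.m3u8
```

The player starts with one variant and switches among the listed renditions as its measured bandwidth changes, which is what makes the HLS viewer experience adaptive.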
 




 Video Codecs: H.265, H.264


 Audio Codecs: AAC-LC, HE-AAC+ v1 & v2, xHE-AAC, Apple Lossless, FLAC
 Playback Compatibility: Great (All Google Chrome browsers; Android, Linux, Microsoft, and MacOS
devices; several set-top boxes, smart TVs, and other players)
 Benefits: Adaptive bitrate and widely supported
 Drawbacks: Quality of experience is prioritized over low latency
 Latency: 6-30 seconds (lower latency only possible when tuned)
 Variant Formats: Low-Latency HLS (see below), PHLS (Protected HTTP Live Streaming)

HLS Explained
LOW-LATENCY HLS
Low-Latency HLS (LL-HLS) is the latest and greatest technology when it comes to low-latency streaming. The
proprietary protocol promises to deliver sub-three-second streams globally. It also offers backward compatibility to
existing clients.
In other words, it’s designed to deliver the same simplicity, scalability, and quality as HLS — while significantly
shrinking the latency. At Wowza, we call this combination the streaming trifecta.
Even so, successful deployments of Low-Latency HLS require integration from vendors across the video delivery
ecosystem. Support is still lacking, and large-scale deployments of Low-Latency HLS are few and far between.

 Playback Compatibility: Any players that aren’t optimized for Low-Latency HLS can fall back to standard
(higher-latency) HLS behavior
 HLS-compatible devices include MacOS, Microsoft, Android, and Linux devices; all Google Chrome
browsers; several set-top boxes, smart TVs, and other players
 Benefits: Low latency, scalability, and high quality… Oh, and did we mention backward compatibility?
 Drawbacks: As an emerging spec, vendors are still implementing support
 Latency: 2 seconds or less

More About Low-Latency HLS


 
MPEG-DASH
MPEG-DASH is a vendor-independent alternative to HLS. Basically, with DASH you get a non-proprietary option that
ensures the same scalability and quality. But because Apple tends to prioritize its own tech stack, support for DASH
plays second fiddle in the slew of Apple devices out there.

   

 Video Codecs: Codec-agnostic
 Audio Codecs: Codec-agnostic
 Playback Compatibility: Good (All Android devices; most post-2012 Samsung, Philips, Panasonic, and Sony
TVs; Chrome, Safari, and Firefox browsers)
 Benefits: Vendor-independent, international standard for adaptive bitrate
 Drawbacks: Not supported by iOS or Apple TV
 Latency: 6-30 seconds (lower latency only possible when tuned)
 Variant Formats: MPEG-DASH CENC (Common Encryption)



MPEG-DASH Explained
 
LOW-LATENCY CMAF FOR DASH
Low-latency CMAF for DASH is another emerging technology for speeding up HTTP-based video delivery. Although
it’s still in its infancy, the technology shows promise in delivering superfast video at scale by using shorter data
segments. That said, many vendors have prioritized support for Low-Latency HLS over that of low-latency CMAF for
DASH.

 Playback Compatibility: Any players that aren’t optimized for low-latency CMAF for DASH can fall back to
standard (higher-latency) DASH behavior
 Benefits: Low latency meets HTTP-based streaming
 Drawbacks: As an emerging spec, vendors are still implementing support
 Latency: 3 seconds or less

CMAF Explained
MICROSOFT SMOOTH STREAMING
Microsoft developed Microsoft Smooth Streaming in 2008 for use with Silverlight player applications. It enables
adaptive delivery to all Microsoft devices. The protocol can’t compete with other HTTP-based formats and is falling out
of use. In fact, in our 2021 Video Streaming Latency Report, only 5 percent of respondents were using Smooth
Streaming.
Which streaming formats are you currently using?

 Video Codecs: H.264, VC-1


 Audio Codecs: AAC, MP3, WMA
 Playback Compatibility: Good (Microsoft and iOS devices, Xbox, many smart TVs, Silverlight player-enabled
browsers)
 Benefits: Adaptive bitrate and supported by iOS
 Drawbacks: Proprietary technology, doesn’t compete with HLS and DASH
 Latency: 6-30 seconds (lower latency only possible when tuned)

ADOBE HDS
HDS was developed for use with Flash Player as the first adaptive bitrate protocol. Because Flash is no more, it’s also
slowly dying. Don’t believe us? Just take a look at the graph above.

 Video Codecs: H.264, VP6


 Audio Codecs: AAC, MP3
 Playback Compatibility: Not widely supported (Flash Player, Adobe AIR)
 Benefits: Adaptive bitrate technology for Flash
 Drawbacks: Proprietary technology with lacking support
 Latency: 6-30 seconds (lower latency only possible when tuned)



 
NEW TECHNOLOGIES
Last but not least, new technologies like WebRTC and SRT promise to change the landscape. Similar to low-latency
CMAF for DASH and Apple Low-Latency HLS, these protocols were designed with latency in mind.
 
SRT
This open-source protocol is recognized as a proven alternative to proprietary transport technologies — helping to deliver
reliable streams, regardless of network quality. It competes directly with RTMP and RTSP as a first-mile solution, but
it’s still being adopted as encoders, decoders, and players add support.
From recovering lost packets to preserving timing behavior, SRT was designed to solve the challenges of video
contribution and distribution across the public internet. And it’s quickly taking the industry by storm. One interactive use
case for which SRT proved instrumental was the 2020 virtual NFL draft.  The NFL used this game-changing technology
to connect 600 live feeds for the first entirely virtual event.

 Video Codecs: Codec-agnostic
 Audio Codecs: Codec-agnostic
 Playback Compatibility: Limited (VLC Media Player, FFPlay, Haivision Play Pro, Haivision Play, Larix
Player, Brightcove)
 Benefits: High-quality, low-latency video over suboptimal networks
 Drawbacks: Not widely supported for video playback
 Latency: 3 seconds or less, tunable based on how much latency you want to trade for packet loss

SRT Explained
 
WEBRTC
As the speediest technology available, WebRTC delivers near-instantaneous voice and video streaming to and from any
major browser. It can also be used end-to-end and thus competes with ingest and delivery protocols. The framework was
designed for pure chat-based applications, but it’s now finding its way into more diverse use cases.
Scalability remains a challenge with WebRTC, though, so you’ll need to use a solution like Wowza’s Real-Time
Streaming at Scale feature to overcome this. The solution deploys WebRTC across a custom CDN to provide near-
limitless scale. This allows broadcasters to reach a million viewers with sub-500 ms delivery — a once impossible feat.
Workflow: Real-Time Streaming at Scale for Wowza Video

 Video Codecs: H.264, VP8, VP9


 Audio Codecs: Opus, iSAC, iLBC
 Playback Compatibility: Chrome, Firefox, and Safari support WebRTC without any plugin
 Benefits: Super fast and browser-based
 Drawbacks: Designed for video conferencing and not scale
 Latency: Sub-500-millisecond delivery



WebRTC Explained
 
CONSIDERATIONS WHEN CHOOSING A STREAMING PROTOCOL
Selecting the right media streaming protocol starts with defining what you’re trying to achieve. Latency, playback
compatibility, and viewing experience can all be impacted. What’s more, content distributors don’t always stick with the
same protocol from capture to playback. Many broadcasters use RTMP to get from the encoder to server and
then transcode the stream into an adaptive HTTP-based format.

Media streaming protocols differ in the following areas:

 First-mile contribution vs. last-mile delivery


 Playback support
 Encoder support
 Scalability
 Latency
 Quality of experience (adaptive bitrate enabled, etc.)
 Security

By prioritizing the above considerations, it’s easy to narrow down what’s best for you.
 
CONTRIBUTION VS. DELIVERY
RTMP and SRT are great bets for first-mile contribution, while both DASH and HLS lead the way when it comes
to playback. On the flip side, RTMP has fallen entirely out of favor for delivery, and HLS isn’t an ideal ingest format.
That’s why most content distributors rely on a media server or cloud-based video platform to transcode their content from
one protocol to another.
 
PLAYBACK SUPPORT
What’s the point of distributing a stream if viewers can’t access it? Lacking playback support is the reason RTMP no
longer plays a role in delivery to end users. And ubiquitous playback support is the reason why HLS is the most popular
protocol today.
 
ENCODER SUPPORT
The inverse of playback support is encoder support. RTMP maintains a stronghold despite its many flaws due to the
prevalence of RTMP encoders already out there. Similarly, RTSP has stayed relevant in the surveillance industry because
it’s the protocol of choice for IP cameras.
WebRTC is unique in that it can be used for browser-based publishing and playback without any additional technologies,
enabling simple streaming for use cases that don’t require production-quality encoders and cameras.
 
SCALABILITY
HLS is synonymous with scalability. The widely-supported HTTP-based protocol leverages web servers to reach any
device worth reaching today. But what it delivers in scalability, it lacks in terms of latency. That’s because latency and
scalability have traditionally been at odds with one another. New technologies like Real-Time Streaming at Scale,
however, resolve this polarity.
 



LATENCY
Low-Latency HLS, low-latency CMAF for DASH, and WebRTC were all designed with speedy delivery in mind.
Anyone deploying interactive video environments should limit themselves to one of these three delivery protocols.
Case Study: Video Transcoding App

WHAT IS A VIDEO FILE?
To discuss the problems surrounding compatibility and bandwidth, we must first explain what a video file is.
When a video is first recorded by a phone or camera, the raw video data is too large to store locally, much less send over
the web. For example, an hour of 1080p 60fps uncompressed video is around 1.3 terabytes.
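The 1.3 TB figure can be checked with a quick back-of-the-envelope calculation, assuming 24-bit color (3 bytes per pixel) and no chroma subsampling:

```python
width, height = 1920, 1080     # 1080p frame
bytes_per_pixel = 3            # 24-bit RGB, uncompressed
fps = 60
seconds = 3600                 # one hour

frame_bytes = width * height * bytes_per_pixel   # 6,220,800 bytes per frame
total_bytes = frame_bytes * fps * seconds
print(total_bytes / 1e12)      # ~1.34 terabytes, matching the text
```

Compression with a codec is what takes this down to a size that can realistically be stored and streamed.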
To bring the size of this raw data down to a more manageable size, the data must be compressed. The software that is
used to compress this data is called a codec (a combination of the words coder and decoder). A codec applies an
algorithm to compress video data, encoding it so that it can be easily stored and sent. Once compressed, the data is
packaged into a file format, called a container. Containers have extensions you may have seen, like .mp4 or .mov.
When playing a video this process is reversed. A media player opens the container, and the same codec is used
to decode the video data and display it on the device.

Encoded videos are decoded with the same codec for playback

THE PROBLEM OF COMPATIBILITY


The first problem that businesses face is that there are dozens of different codecs, containers, and video players, each
with their own strengths and weaknesses. Unfortunately, they are not all compatible with each other. Using the wrong
codec or container could mean that some users can’t play certain videos on their device. Therefore a decision must be
made as to which codec and container will be used to package a video.

Usually this decision will be made based on the characteristics of the codec and the types of devices or media players that
a business expects their users to have. Once they have made this decision, they need to convert any video files they have
using the codec and container they have decided on.



Businesses select a codec and container format to optimize compatibility
However, now they need to answer a second question: how much should they compress their videos?

THE PROBLEM OF BANDWIDTH


Generally speaking, there is a negative correlation between the level of compression and the quality of the resultant
video. The more a video file is compressed, the greater the decrease in visual fidelity. This means that the less a video is
compressed, the more of its original quality is preserved and the larger the file size. However, not all users will have the
bandwidth to quickly download larger, higher quality files.

Consider for instance, the difference in download speed that will be available to a user on a fiber internet connection in
their office, and a user on 3G mobile connection going through a subway tunnel as they both attempt to download the
same video. The person in their office will have a smooth experience, whereas the person in the tunnel may have a
choppy experience, if their video plays at all.
For this reason, businesses will usually create multiple versions of the same video at different rates of compression, and
thus different file sizes. Modern media players detect users’ bandwidth, and deliver the video file most appropriate for
the speed of their connection. Smaller, more compressed videos will be delivered to users with less available bandwidth,
while users with stronger connections will be served higher quality videos.
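A simplified version of that player-side selection logic can be sketched as follows; the bitrate ladder and headroom factor are illustrative assumptions:

```python
# Hypothetical bitrate ladder: (rendition name, required bandwidth in kbps),
# ordered best-first.
LADDER = [("1080p", 6000), ("720p", 3000), ("480p", 1200), ("240p", 400)]

def pick_rendition(measured_kbps, headroom=0.8):
    """Choose the highest rendition whose bitrate fits within a fraction
    (headroom) of the measured bandwidth, to absorb throughput jitter."""
    budget = measured_kbps * headroom
    for name, required in LADDER:
        if required <= budget:
            return name
    return LADDER[-1][0]   # weakest connections get the smallest file

print(pick_rendition(8000))   # fast office connection
print(pick_rendition(900))    # 3G-ish connection in a tunnel
```

The fiber user from the example above lands on the top rung of the ladder, while the 3G user is served the smallest, most compressed file.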

VIDEO TRANSCODING AND ITS INHERENT CHALLENGES

To recap, businesses that deliver video will use a codec to convert their videos into a single container format,
compressed to multiple file sizes.



This process of conversion is called video transcoding. The process of ingesting a video file and transcoding it to
specific formats and file sizes is coordinated and facilitated by a transcoding pipeline.
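A single stage of such a pipeline is often a thin wrapper around a tool like FFmpeg. The sketch below only builds the command lines for an assumed three-rung ladder of H.264/MP4 renditions; the file name and bitrates are illustrative, and each command would then be executed with something like `subprocess.run(cmd, check=True)` on a machine with FFmpeg installed.

```python
def build_commands(src: str, ladder=((1080, "5000k"), (720, "2500k"), (480, "1000k"))):
    """Build FFmpeg commands to transcode one source into several renditions."""
    cmds = []
    for height, bitrate in ladder:
        out = f"{src.rsplit('.', 1)[0]}_{height}p.mp4"
        cmds.append([
            "ffmpeg", "-i", src,
            "-c:v", "libx264", "-b:v", bitrate,   # H.264 video at the target bitrate
            "-vf", f"scale=-2:{height}",          # resize, preserving aspect ratio
            "-c:a", "aac",                        # AAC audio
            out,
        ])
    return cmds

for cmd in build_commands("lecture.mov"):
    print(" ".join(cmd))
```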
Every business that delivers video files on the internet will consider how they are going to handle this process of
transcoding their videos. Unfortunately, the video transcoding process possesses its own inherent challenges that need to
be addressed when negotiating a transcoding solution.
First, transcoding large video files takes a very long time. Transcoding a single 60-minute HD video can take anywhere from two to six hours, and sometimes more.
Second, transcoding is demanding on memory and CPU. A video transcoding process will happily consume all the CPU and memory thrown at it.
Transcoding on a single local machine may be viable for individuals who only transcode a few videos per month. However, businesses with regular video demand will not find this feasible, and will generally choose between one of two professional transcoding solutions.
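Because each job can saturate a machine, even a simple local setup usually bounds how many transcodes run at once. A minimal sketch of that idea, where `transcode` is a stand-in for a real FFmpeg invocation:

```python
from concurrent.futures import ThreadPoolExecutor

def transcode(job: str) -> str:
    """Stand-in for a long-running, CPU-heavy FFmpeg invocation."""
    return f"done:{job}"

jobs = ["intro.mov", "lecture1.mov", "lecture2.mov"]

# Cap concurrency so transcodes don't starve the machine of CPU and RAM.
with ThreadPoolExecutor(max_workers=2) as pool:
    results = list(pool.map(transcode, jobs))  # preserves job order

print(results)
```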
2.5 EXISTING VIDEO TRANSCODING SOLUTIONS
Broadly considered, professional video transcoding solutions fall into two categories: custom solutions developed and maintained in-house, and third-party commercial software.
Custom transcoding

Some businesses choose to build their own video transcoding farm in-house. In this case, a development team is needed
to build a custom transcoding pipeline and deploy it to bare-metal servers or cloud infrastructure (e.g., Amazon EC2, DigitalOcean Droplets).
Once deployed, this option is cheaper than third-party software, and it provides businesses with maximum control over
what formats, codecs, and video settings they support, as well as any additional transformations they want to make to
their videos.
This option requires significant technical expertise both to develop and to scale effectively: video transcoding can be
both slow and error-prone without serious engineering attention. As we’ll see later, provisioning servers for video
transcoding can also result in periods in which compute power goes unused.
Building a custom transcoding farm is best suited to video platforms such as Netflix and YouTube: companies that
ingest and stream millions of hours of video each week can build their own engineering operations around this task to
minimize the trade-offs.

Commercial software

A second solution is to use commercial video transcoding software. Services such as Amazon MediaConvert and
Zencoder offer powerful cloud-based transcoding.



These services provide comprehensive solutions for businesses that need to support multiple formats, codecs, and
devices. Because these services specialize in video, they will be tuned for speed and adept at handling the error cases that
typically accompany transcoding jobs.
However, the number of transcoding options many of these services provide may be overkill for smaller businesses that
have fewer input and output requirements. And, as one might expect, these services tend to be a more expensive option.
Transcoding services are a good fit for media production companies (e.g. HBO, BBC) that produce massive amounts of
video content that lands on many devices, and don’t want to build large software engineering teams around video
transcoding.

Solutions compared
Let’s briefly review the options outlined above.

Custom pipelines provide the highest level of control over inputs and outputs. The bottleneck is technical expertise: to
really get the best performance and cost from this model, a business needs to build a dedicated software engineering team
around video transcoding.
By contrast, businesses can outsource their video transcoding to third-party services. These services provide the benefits
of speed and control, but at a higher cost.
These options are sufficient for some businesses, but the Bento team saw an opportunity here: is it possible to build a
fast, low-cost transcoding pipeline suited to businesses that don’t have video expertise and won’t need a plethora of
video options?
We felt that the answer was yes, and that the solution might lie in serverless technology.

