
ACCEPTED FROM OPEN CALL

Internet-Scale Video Streaming over NDN

Chavoosh Ghasemi, Hamed Yousefi, and Beichuan Zhang

Chavoosh Ghasemi and Beichuan Zhang are with The University of Arizona; Hamed Yousefi is with Aryaka Networks.
Digital Object Identifier: 10.1109/MNET.121.1900574
IEEE Network, September/October 2021. 0890-8044/21/$25.00 © 2021 IEEE

Abstract
Research in Information-Centric Networking (ICN) and Named Data Networking (NDN) has produced many protocol designs and software prototypes, but they need to be validated and evaluated through real usage on the Internet, which is also critical to the realization of the ICN/NDN vision in the long run. This article presents the first Internet-scale adaptive video streaming service over NDN, which allows regular users to watch videos using NDN technology with no software installation or manual configuration. Since mid-2019, the official NDN website [1] has been using our service to deliver its videos to Internet users over the global NDN testbed, showcasing the feasibility of NDN. We conduct real-world experiments on the NDN testbed to validate the proper implementation of the client, network, and server sides of the proposed system and to evaluate the performance of our video streaming service.

Introduction
Significant changes in Internet usage have led to the rapid development of information-centric networking (ICN) architectures, such as Named Data Networking (NDN) [2, 3], which makes a fundamental shift from address-centric to content-centric communications. Over almost a decade of research on ICN, many protocols, mechanisms, and software prototypes have been designed and implemented; however, they have mostly been evaluated in simulations or small-scale demonstrations. At this stage of architectural development, what is keenly needed is a trial deployment on the Internet that can attract real users and generate real traffic. Such a deployment will demonstrate NDN's feasibility in real-world settings and its benefits to applications, while providing the data necessary to validate and evaluate existing designs and implementations.

We direct the main focus of this article toward realizing an adaptive video streaming service entirely over NDN and deploying it over the global NDN testbed for general Internet users. We choose video streaming because it is one of the most popular applications on the current Internet, contributing the majority of modern Internet traffic, and thus offering a better chance to attract users and traffic. (According to the Cisco VNI report [4], video traffic will account for 82 percent of total Internet traffic by 2022.)

Challenges in deploying NDN services in the wild are twofold: usability and performance. General NDN deployment requires software installation and configuration on end-hosts, which many users do not want to perform or are not capable of performing. Besides, most existing NDN applications are developed from scratch with entirely different user interfaces, making users reluctant to adopt them. Also, users expect NDN services/applications to deliver a Quality of Experience (QoE) comparable to that of the popular services they are accustomed to; otherwise, they may give up on using NDN services. Therefore, to successfully attract general Internet users and real traffic, the deployed service needs to be transparent to end-users and offer reasonably good performance.

In this article, we develop an adaptive bit-rate video streaming service over NDN by employing the NDN community's rich collection of libraries and tools, developed and validated over almost a decade. Since mid-2019, the NDN website [1] has been using our video streaming service [5] instead of third-party services, such as YouTube and Vimeo, to deliver its video content to regular Internet users over the NDN protocol. This allows us to validate and evaluate our streaming service over the NDN testbed by streaming the videos of the NDN website. The results confirm that:
• Video delivery adapts to the highest feasible quality based on the network condition while providing smooth video playback.
• The overall QoE in terms of startup delay, jitter, and re-buffering is satisfactory.
• In-network caching helps improve the overall performance of video streaming over the NDN testbed.
This work also shows the feasibility of NDN technology operating in the wild in real-world settings. The experiments, however, reveal a number of interesting open problems on in-network caching for the NDN community to investigate. We have open-sourced our software and published demos and tutorials on the project's website [5] to encourage the development of more such services/applications over NDN.

Background and Related Work

Basics of Video Streaming
After HTML5 standardization, video streaming over HTTP rapidly became popular, as browsers could play back an embedded video without any plugin. To improve QoE and maximize connection utilization, "adaptive bit-rate" (ABR) video streaming was proposed. Adaptiveness is also vital to today's streaming services.
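The bit-rate adaptation idea can be sketched as a simple selection rule: given the player's current bandwidth estimate, pick the highest representation whose minimum required bandwidth still fits. This is an illustrative sketch only, not Shaka Player's actual adaptation algorithm; the representation table is taken from the experimental setup described later in the article.

```javascript
// Illustrative ABR selection sketch (not Shaka Player's real logic).
// Representations and minimum bandwidths (Mb/s) come from the paper's setup.
const representations = [
  { name: '240p',  minBandwidthMbps: 0.6 },
  { name: '360p',  minBandwidthMbps: 0.9 },
  { name: '480p',  minBandwidthMbps: 1.8 },
  { name: '720p',  minBandwidthMbps: 3.3 },
  { name: '1080p', minBandwidthMbps: 6.3 },
];

// Pick the highest representation the estimated bandwidth can sustain;
// fall back to the lowest one when even 240p does not fit.
function selectRepresentation(estimatedMbps, reps = representations) {
  const sorted = [...reps].sort((a, b) => b.minBandwidthMbps - a.minBandwidthMbps);
  const fit = sorted.find((r) => r.minBandwidthMbps <= estimatedMbps);
  return (fit || sorted[sorted.length - 1]).name;
}
```

With an 8 Mb/s estimate this picks 1080p; as the estimate falls past each threshold, the selection steps down, mirroring the throttling experiment in the evaluation.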
It allows a service to support various types of end-user devices with different hardware/software resources. To support the ABR feature, the server prepares a set of different versions (called representations) of each video file, each with different characteristics (e.g., frame size, bit-rate, codec, and so on), and packages them according to a packaging standard such as HTTP Live Streaming (HLS). The packaging process divides each representation into a series of small segments, each containing a few seconds of video, and generates the playlist files that reference the segments. The browser on the end-user side then decides which representation to retrieve from the server for smooth but high-quality streaming by actively monitoring its resources. Modern browsers, however, do not natively support bit-rate adaptation logic, so third-party JavaScript libraries have been developed to enable this feature. This work employs Shaka Player [6], a light and fully modular library that supports most operating systems and modern browsers.

FIGURE 1. Overview of video streaming over the NDN testbed.

Basics of Named-Data Networking
NDN is a large collaborative project to design and implement a scalable, resilient, and secure future Internet, architected to distribute content to millions of users efficiently. NDN replaces IP addresses in network packets with content names, and both requests (i.e., Interests) and responses (i.e., Data) carry the name of the solicited content rather than a source/destination address. Thus, instead of asking the network to deliver packets to a specific destination, content consumers request content, and the network can retrieve it from anywhere. Names in NDN are hierarchically structured and composed of multiple components separated by "/", like /ndn/video/playlist.m3u8. In NDN, every node maintains a table (called a PIT) that keeps track of the Interests currently waiting for corresponding Data packets. Returning Data packets follow the PIT entries, as if following a trail of breadcrumbs. This two-way Interest-Data exchange allows any NDN node to monitor the performance of its own data retrieval in terms of delay, jitter, loss, congestion, and so on, and to adapt itself to the network condition. These fundamental changes enable several interesting features in the network, such as native multicast and in-network caching, native multipath and multisource delivery, and stateful and resilient packet forwarding. Moreover, all Data packets in NDN are signed by their producers so that they are protected both in storage and in transit. These features make NDN a natural fit for deploying a content distribution system, such as a video streaming service, as the network can efficiently and reliably retrieve video segments from different sources through different paths while receivers can independently verify the integrity of the Data packets.

Related Work
A number of works in the literature have designed and implemented native ICN applications [7, 8]. These works successfully re-engineered existing applications based on the ICN paradigm so that the implemented/prototype applications can exploit the full architectural benefits of ICN. However, developing such applications can be very complicated and time-consuming. Moreover, native ICN applications are hard to deploy because they require end-users to install new software. Another line of work attempts to make existing applications run on top of ICN without modifying them [9]. Such approaches require less development effort and can still utilize most of ICN's benefits; however, end-users still cannot use these applications without manual configuration or installing new software.

The current article, on the other hand, develops a service that end-users can use with zero manual configuration and no software installation. This approach requires less development effort; however, compared to a native application, it exploits fewer of ICN's benefits. Other valuable works in the literature [10–12] reveal several technical challenges and performance issues in adapting existing applications to the content-based paradigm; however, they are beyond the scope of this article.

NDN Video Streaming Service
Figure 1 gives the system overview of our video streaming service and the specific steps involved. On the server side, video files are prepared and served as NDN packets; on the client side, browsers load a set of JavaScript libraries to support adaptive bit-rate streaming and NDN functionality; in the middle, the NDN testbed connects the server and client and forwards NDN packets between them.

System Design
Server Side: To support various user devices and networks, for each video we generate a set of representations with different resolutions and encodings on the server side. We then package these representations using the HLS standard, with a segment size of 2–4 seconds for a fair trade-off between encoding efficiency and flexibility when adapting to bandwidth changes [13].

To stream a video in HLS format, all a browser needs is file transfer functionality over HTTP. To make the browser use NDN instead of HTTP, we need to divide each file into chunks on the server side, prepare these chunks as NDN Data packets with proper signatures, and serve them when corresponding Interests are received. We have developed a tool for this purpose.
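The partitioning step can be sketched as below. The 8000-byte chunk size, the 0-based chunk numbering, and the placeholder signature string are illustrative assumptions; the real tool cryptographically signs each chunk and stores it in MongoDB.

```javascript
// Sketch of server-side chunking under the paper's naming convention:
//   /ndn/web/video/<video-file-name>/<version>/<chunk-number>
// Chunk size, 0-based numbering, and the fake "signature" field are
// illustrative assumptions, not the deployed tool's parameters.
function chunkRepresentation(fileName, version, data, chunkSize = 8000) {
  const chunks = [];
  for (let offset = 0, i = 0; offset < data.length; offset += chunkSize, i++) {
    chunks.push({
      name: `/ndn/web/video/${fileName}/${version}/${i}`,
      content: data.slice(offset, offset + chunkSize),
      // Placeholder only; a real Data packet carries a cryptographic signature.
      signature: `sig(${fileName}/${version}/${i})`,
    });
  }
  return chunks;
}
```

Each record corresponds to one Data packet: an Interest for the record's name is answered with its content, so a chunk is always served in its entirety.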
The tool partitions each representation of a video into a series of chunks, signs them, and stores them in a Mongo database. Due to space limitations, we do not detail the namespace design for the Mongo database and refer readers to [5].

Each chunk can then be served in its entirety within a Data packet by issuing an Interest for the said Data packet. The naming of these chunks follows a convention such as /ndn/web/video/<video-file-name>/<version>/<chunk-number>. As shown on the right side of Fig. 2, we have developed an NDN file server to serve the stored chunks. Upon receiving an Interest, the file server reads the corresponding chunk from the database, packages it as a Data packet, and sends it out. If the requested chunk does not exist, a NACK is sent back.

FIGURE 2. Architecture of client and server sides of our NDN video streaming service.

Client Side: Employing JavaScript technology for video streaming provides a solution with zero client-side configuration, cross-platform support, and transparent service. Figure 3 compares how existing technology streams a video over HTTP versus how NDN does the job. Here, each video segment is a resource for which the video player makes a network request call. A resource request is represented by a URL, and the resource can be any file (e.g., a playlist, video, or audio segment) needed to play back the solicited video. To resolve a resource request, browsers provide applications with Network Request APIs (NRA) (left box in Fig. 3). An NRA sends an HTTP request over TCP to the destination to fetch the requested resource. Upon receiving the resource, it forms a proper HTTP response containing the received data and sends it back to the caller (e.g., the video player).

To make the video player (or any HTTP-based application in general) use NDN for network communications, we need to override the handler of resource requests. We could directly override the browser's built-in NRA by loading custom JavaScript code on the client side to obtain NDN functionality. This leaves the application intact while redirecting its network request calls to custom JavaScript code. From there on, NDN takes care of fetching the solicited resource and sends a proper HTTP response back to the video player upon retrieving the resource. However, this requires our code to translate between HTTP packets and NDN packets, which is a significant challenge due to the many features and header options of the HTTP protocol. Thus, as the right side of Fig. 3 shows, instead of employing the browser's built-in NRA, our video application makes all its network requests through a third-party JavaScript library, that is, Shaka Player. To let this library use NDN to handle network requests, we have developed a new network module that employs NDN-JS and added it to Shaka Player, so that all network requests are resolved over the NDN protocol; that is why we refer to it as a modified version in Fig. 3.

FIGURE 3. How browsers stream videos over IP vs NDN.

As shown on the left side of Fig. 2, when the NDN network module loads, it connects to an NDN node (typically one of the NDN testbed nodes) via WebSocket. After receiving a resource request from the video player, the NDN module starts expressing Interests to fetch the chunks of the solicited resource. A resource request contains a protocol and a URI path. For both the HTTP and HTTPS protocols, the request is redirected to the NDN module, as we use the NDN protocol for client-server communications. Usually, the client retrieves the playlist file first and then fetches individual video and audio chunks. A basic name discovery mechanism is involved in retrieving each file. For example, to fetch the playlist file, the client sends the first Interest with the name /ndn/web/video/foo/playlist.m3u8, where it does not yet know the version number of the file. Upon receiving this Interest, the file server returns a Data packet containing the full name of the solicited content (including the version and chunk number) as well as the content of the first chunk of the playlist.m3u8 file. Upon receiving this Data packet, the client learns the version number and starts sending Interests with full names to retrieve the remaining chunks.

Network: The NDN testbed is an overlay network comprising over 32 member institutions, mostly universities, from around the world, in support of building a worldwide research network for the development and testing of emerging ICN protocols, services, and applications.
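The name discovery exchange described above can be sketched against an in-memory mock producer. The versioning and chunk bookkeeping here are simplified illustrations of the convention; the real client expresses Interests through NDN-JS over a WebSocket rather than reading a local map.

```javascript
// Mock producer: full name -> { version, content, finalChunk }.
const store = new Map();
function publish(baseName, version, chunks) {
  chunks.forEach((content, i) => {
    store.set(`${baseName}/${version}/${i}`,
              { version, content, finalChunk: chunks.length - 1 });
  });
  // The bare (version-less) name resolves to the first chunk of the
  // latest version, mimicking the server's answer to a discovery Interest.
  store.set(baseName, store.get(`${baseName}/${version}/0`));
}

// Consumer: the first Interest carries the bare name; the returned Data
// reveals the version, and the remaining chunks are fetched by full name.
function fetchFile(baseName) {
  const first = store.get(baseName);
  if (!first) throw new Error(`NACK: ${baseName}`); // no such content
  const parts = [first.content];
  for (let i = 1; i <= first.finalChunk; i++) {
    parts.push(store.get(`${baseName}/${first.version}/${i}`).content);
  }
  return { version: first.version, content: parts.join('') };
}
```

For instance, publishing version 2 of /ndn/web/video/foo/playlist.m3u8 as two chunks and then fetching it by the bare name yields the discovered version plus the reassembled playlist.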
By connecting a new file server to the testbed, the name prefix of the file server's content is propagated to the entire testbed. When Interests for this name prefix arrive, the testbed nodes know how to forward them. If there are multiple sources (producers or temporary caches) at different locations, the testbed routing protocol and forwarding strategy decide the best way to forward Interests. For routing security purposes, each name prefix announcement needs to be properly authorized. Therefore, content providers need to obtain a valid certificate for the name prefix (e.g., /ndn/web) from the testbed administrator before making the announcement.

We employ an HTTP-based service called "Find Closest Hub" (FCH), running on the NDN testbed, that allows any client to find geographically nearby testbed nodes. However, the closest node does not necessarily provide the best performance, due, for example, to the node's workload or its network condition. Thus, on the client side, our service probes the nodes returned by the FCH service to learn how fast each node responds, and then chooses the one with the shortest response time. (In the rest of the article, we refer to this node as the gateway.) The client then makes a WebSocket connection to the chosen testbed node and uses it for all NDN communication.

FIGURE 4. Adaptive bit-rate feature of the service while throttling the egress bandwidth of the file server.

Putting everything together, Fig. 1 illustrates all the steps involved in this video streaming service. First, video files are prepared at the server, and their name prefixes are announced into the testbed. Second, the client uses a web browser to access a web page that contains all required JavaScript libraries. Third, the NDN network module calls FCH and provides the client with a number of nearby testbed nodes. Fourth, the client connects to its gateway; upon starting the video, the video player sends resource requests, which are turned into NDN Interests, and when NDN Data comes back, it is returned to the player as video files. Note that the entire process is transparent to the end-user.

Evaluation
In this section, we first validate the adaptive bit-rate feature of the service. We then present evaluation results for the client, network, and server sides of the service, providing a clear understanding of each side's performance. On the client side, we measure QoE in terms of experienced video quality, the number of re-bufferings during video playback, and video startup delay. On the network side, we focus on Interest-Data RTT and jitter while revealing the role of in-network caching. On the server side, we show the server's contribution to the overall content retrieval delay by measuring how fast it responds to incoming Interests. All performance measurements in this section are reported by our statistics collector tool, which is not presented in this article due to the page limit; we refer the reader to [5] for its design and implementation details.

We evaluate the performance of the service in two scenarios: no-cache, when no part of the video has been cached in the network; and with-cache, when some parts of the video have already been cached in the network.

Experimental Setup
The core network in our experiments is the global NDN testbed, and our file server, which serves the NDN website videos, resides in Portland. The end-user is in Arizona and watches several videos on the NDN website. Due to space limitations, we only report average results for single-video scenarios over a one-week period.

For each video on the NDN website, our server provides the end-user with five representations to support different connection qualities. The resolution and minimum required bandwidth of each representation are: 240p (very poor): 0.6 Mb/s; 360p (poor): 0.9 Mb/s; 480p (medium): 1.8 Mb/s; 720p (HD): 3.3 Mb/s; 1080p (full HD): 6.3 Mb/s.

Results
Adaptive Bit-Rate Streaming: An essential feature of our service is support for adaptive streaming, enabled by employing a third-party JavaScript library. To validate the ABR feature, Fig. 4 shows the consumer's behavior while the egress bandwidth of the file server is throttled. To eliminate the contribution of in-network caching in this scenario, the end-user establishes a direct tunnel to the server. The player keeps monitoring content retrieval performance (e.g., RTT, number of timeouts, and so on) and switches between the available bit-rates (i.e., video resolutions) based on its network bandwidth estimation at a given moment.

We set the server's egress bandwidth to 20 Mb/s and decrease it step by step. After startup, the player switches to full-HD resolution and stays with it until the server bandwidth reaches 8 Mb/s. From there on, the video resolution downgrades until it switches to 240p, where the server's and player's estimated bandwidths drop to 1 Mb/s and 0.8 Mb/s, respectively. Conversely, as the server's egress bandwidth increases, the player improves the resolution. At one point (marked by circles), the video resolution drops to 720p because of the effect of network congestion on the player's bandwidth estimation.

Client-Side Evaluation: Three performance metrics play vital roles in measuring QoE on the client side:
• Startup delay, the amount of time the end-user waits until the video starts playing
• Number of re-bufferings, the number of stops/stalls during video playback
• Video quality, the video resolution that the end-user experiences.

FIGURE 5. a) Experienced bandwidth; b) video quality on the client side during video playback.

The video startup delay in the no-cache scenario is 4.19 seconds. The startup delay includes the delay of loading the necessary resources (including JavaScript libraries) and buffering a sufficient amount of video (i.e., 2–4 seconds). According to [14], end-users prefer a startup delay below two seconds, and each incremental delay of one second can result in a 5.8 percent increase in the user abandonment rate. Thus, there is still room to improve the service and the NDN system. Moreover, in none of the runs did the end-user experience re-buffering during video playback.

In-network caching role: We also investigate how in-network caching affects the user's QoE. Among all the runs, we have chosen the one that fairly represents the average behavior of the system in terms of video quality experienced by the end-user, and Fig. 5 compares the no-cache and with-cache scenarios for that run. For all with-cache runs, we made sure that the network had already cached a 720p version of the videos before the end-user in Arizona started watching them. Thus, in the with-cache scenario, when the end-user starts to watch the videos, the player fetches the 720p version from the gateway (the Arizona hub). Because the channel between the end-user and its gateway is fast, the ABR logic in the player estimates that the network is good enough to switch to a better video resolution, asking for the full-HD version. However, the full-HD version is not cached in the network. Therefore, the player senses the channel to the producer (the file server). This time, it experiences a longer delay and lower network throughput. Thus, the bandwidth estimation decreases, and the player switches back to the 720p video. Looking at the figures, we can see that the end-user's estimated bandwidth and experienced video quality oscillate, which is the result of in-network caching.

Although comparing the with-cache and no-cache scenarios in Fig. 5 shows that caching improves the video quality experienced by the end-user, in-network caching misleads the ABR logic, as it causes the player to sense more than one channel during video playback. The issue is that traditional ABR logic is designed for one-channel communication, while NDN is a multi-channel environment. This problem is discussed in detail in [10, 12]. As another issue revealed by our results, the cache hit ratio in the with-cache scenarios is less than 53 percent (while 92 percent of the 720p version of the video had been cached in the network). This shows that the end-user does not fully exploit in-network caching. Although it is beyond the scope of this article, we believe that employing a layered video encoding technique can address this problem. Such a technique would allow the end-user to retrieve the 720p video from the gateway and request only the remaining enhancement layers of the 1080p representation from the server. This way, not only can the end-user experience full-HD video for probably the entire playback, but the cache hit ratio also increases (and the server workload decreases) to 92 percent instead of 53 percent.

Network-Side Evaluation: In this section, we study the network-related performance metrics. As mentioned earlier, each video is composed of several files (e.g., video and audio segments, playlist files, and so on), and each file may include one or more chunks. For a given file, we calculate the average retrieval delay of its chunks and report it as the average Interest-Data RTT (or RTT for short). For a particular chunk, the RTT counts the time between the latest sent Interest and the earliest received Data packet for that Interest. Figure 6 depicts the cumulative distribution function (CDF) of the average RTT of all retrieved files. In the no-cache scenario, the average RTT of 90 percent of the files falls in the range of 78 ms to 104 ms. (According to [15], for smooth video playback, the RTT should be kept below 150 ms, which shows that our service's latency is acceptable.) On the other hand, in the with-cache scenario, the average RTT of 90 percent of the files falls in the range of 40 ms to 100 ms. This improvement is a direct result of retrieving over 50 percent of the video from the end-user's gateway.

Another essential metric to consider is jitter, which shows the delay variance of successive packets. Jitter is important because any significant change in it can cause re-buffering and thus poor performance. Figure 6 shows that the average jitter of all files in the no-cache scenario is less than 13 ms, which meets the basic standards of a video-on-demand service. (According to [5], keeping jitter below 30 ms leads to fluent video playback.) On the other hand, in the with-cache scenario, the percentage of reported files with jitter below 13 ms is 81 percent. Moreover, based on our results, in-network caching can cause the average jitter to exceed 30 ms at some points.
of fetching the content from two different nodes
(i.e., the gateway and server) during watching the
video that magnifies the delay changes. Although
the effect of jitter on video playback is mitigated
by buffering, we see that in-network caching can
potentially worsen the jitter of a video streaming
service. A viable solution to this issue is to pre-
fetch Data packets of the solicited content at the
gateways. More specifically, upon receiving the
very first Interests for a content, the end-user’s
gateway starts fetching a sufficient number of
upcoming Data packets to satisfy the end-user’s
future Interests from its cache, right away. This
approach minimizes jitter by bounding the Inter-
est-Data delay to the RTT between the end-user
and its gateway. That said, NDN comes with a
FIGURE 6. a) Average Interest-Data RTT and jitter for each file during video play-
back.
module, called forwarding strategy, which allows
researchers and developers to rule whether,
when, and how to forward the Interests on each net, the current work brings it to everyday users
network node. We believe an interesting way to and showcases NDN’s capability to work in real-
realize this solution is to design a new forwarding world settings. This system has been serving over
strategy equipped with a pre-fetching mechanism. hundreds of gigabytes of the official NDN website
Server-Side Evaluation: On the server side, to thousands of daily Internet users during the
an important metric to consider is the server’s past few months.
response time that shows the server’s contribution
to content retrieval delay. It includes the time for oPen Problems
processing a received Interest, reading the asso- By involving a large number of users, this work
ciated bytes from disk, creating a Data packet, will significantly help the NDN community find
and sending it out/back. Our results show that NDN’s design and implementation problems,
over 98 percent of received Interests during video opening up an avenue for future research. This
playback for all runs are satisfied in less than 5ms. article reveals three interesting problems caused
This shows a reasonable implementation of our by in-network caching for time-sensitive applica-
file server and clarifies that the server is not the tions:
bottleneck in the content retrieval pipeline. • Traditional ABR mechanisms cannot be
directly used in NDN as they have been
dIscussIon designed for single-channel networks, while
in-network caching makes NDN a complete multi-channel environment. We believe designing a multi-channel ABR algorithm is a viable approach to address this issue.
• In-network caching shows poor performance with traditional video encoding techniques, since each video representation is encoded as a separate file. We believe employing a layered video encoding technique can mitigate this problem.
• In-network caching intrinsically causes high delay variance (jitter) on the end-user side. We suggest designing a new forwarding strategy with a pre-fetching mechanism to alleviate this problem.

Running Existing Applications over NDN

As a critical deployment challenge, ICN/NDN requires existing applications to be rewritten or modified in order to understand the NDN protocol and fully receive its architectural benefits. From-scratch and proxy-based solutions are the two traditional lines of work that try to address this issue; both involve a demanding development process and require end-users to install new software or apply manual configurations. This article, on the other hand, opens a new line of work in the literature and introduces a new way of thinking about running existing applications/services over NDN:
• Without modifying the application: we propose to develop an NDN module that handles all of the application's network interactions on the front-end.
• Without involving the end-user: we propose to employ zero-configuration technologies, such as JavaScript, on the front-end to enable NDN functionality on the fly.
• While exploiting NDN's architectural benefits: we propose to design and implement the application's back-end over NDN.

NDN's Feasibility in the Real World

Since the very beginning, the NDN community has raised a major question that remained unanswered: "Can NDN work in the wild?" This work, for the first time to the best of our knowledge, gives a clear YES answer to this question. While NDN has been mostly an academic research project for the future Internet, it is now serving real traffic from daily Internet users.

Conclusion

This work presented the system design of an adaptive video streaming service over NDN as the first public ICN/NDN service for daily Internet users and detailed its deployment on the global NDN testbed. Our experimental results validated the design and implementation of our software and showed a reasonable QoE for the service over the global NDN testbed, using the videos of the official NDN website.

References

[1] "Named-Data Networking Official Website," 2019; available: https://named-data.net.
[2] L. Zhang et al., "Named Data Networking," ACM SIGCOMM Comput. Commun. Rev., vol. 44, no. 3, 2014, pp. 66–73.
[3] C. Ghasemi et al., "On the Granularity of Trie-Based Data Structures for Name Lookups and Updates," IEEE/ACM Trans. Netw., vol. 27, no. 2, 2019, pp. 777–89.

IEEE Network • September/October 2021 179

[4] "Cisco Visual Networking Index: Forecast and Methodology, 2017–2022," Feb. 2019; available: https://www.cisco.com/c/en/us/solutions/collateral/service-provider/visual-networking-index-vni/white-paper-c11-741490.html.
[5] "iViSA Project," 2019; available: https://ivisa.named-data.net.
[6] "Shaka Player JavaScript Library," 2019; available: https://github.com/google/shaka-player.
[7] V. Jacobson et al., "VoCCN: Voice-over Content-Centric Networks," Proc. Workshop on Re-architecting the Internet (ReArch'09), 2009, pp. 1–6.
[8] P. Gusev and J. Burke, "NDN-RTC: Real-Time Video Conferencing over Named Data Networking," Proc. ACM Conf. Information-Centric Networking (ICN'15), 2015, pp. 117–26.
[9] T. Liang and B. Zhang, "NDNizing Existing Applications: Research Issues and Experiences," Proc. ACM Conf. Information-Centric Networking (ICN'18), 2018, pp. 172–83.
[10] D. Nguyen, J. Jin, and A. Tagami, "Cache-Friendly Streaming Bitrate Adaptation by Congestion Feedback in ICN," Proc. ACM Conf. Information-Centric Networking (ICN'16), 2016, pp. 71–76.
[11] J. Samain et al., "Dynamic Adaptive Video Streaming: Towards a Systematic Comparison of ICN and TCP/IP," IEEE Trans. Multimedia, vol. 19, no. 10, 2017, pp. 2166–81.
[12] R. Grandl et al., "On the Interaction of Adaptive Video Streaming with Content-Centric Networking," 2013.
[13] S. Lederer, "Optimal Adaptive Streaming Formats MPEG-DASH & HLS Segment Length," 2015; available: https://bitmovin.com/mpeg-dash-hls-segment-length/.
[14] S. S. Krishnan and R. K. Sitaraman, "Video Stream Quality Impacts Viewer Behavior: Inferring Causality Using Quasi-Experimental Designs," IEEE/ACM Trans. Netw., vol. 21, no. 6, 2013, pp. 2001–14.
[15] B. Muralidharan, "Video Quality of Service Tutorial," 2017; available: https://www.cisco.com/c/en/us/support/docs/quality-of-service-qos/qos-video/212134-Video-Quality-of-Service-QOS-Tutorial.html.

Biographies

Chavoosh Ghasemi is a Ph.D. student in the Department of Computer Science at the University of Arizona. He graduated from Sharif University of Technology in 2014, majoring in information and communication technology engineering. His research interests include information-centric networks, routing and forwarding protocol design, content delivery networks, name lookup data structures, and video streaming QoS.

Hamed Yousefi is currently a senior member of technical staff at Aryaka Networks. From 2016 to 2018, he was a postdoctoral research fellow in the Department of Electrical Engineering and Computer Science, University of Michigan. He received his Ph.D. in computer engineering from Sharif University of Technology in 2015. His research interests include future Internet/network architectures, information-centric networks, content delivery networks, and software-defined wide area networks.

Beichuan Zhang received the B.S. degree from Peking University, Beijing, China, in 1995, and the Ph.D. degree from the University of California, Los Angeles (UCLA), CA, USA, in 2003. He is an associate professor in the Department of Computer Science, University of Arizona, Tucson, AZ, USA. His research interest is in Internet routing architectures and protocols. He has been working on named data networking, green networking, inter-domain routing, and overlay multicast. He received the first Applied Networking Research Prize in 2011 by ISOC and IRTF, and the Best Paper Award at IEEE ICDCS in 2005.

