YouView Core Technical Specification 1.0
Table of Contents

1 This Document
1.1 Introduction
1.2 Relationship to DTG D-Book 7
1.3 Implementation
1.4 Typographical conventions

Chapter I: Overview
1. Chapter Summary
2. Consumer Device Operating Environment
3. Summary of Core Requirements
3.1. Consumer Device Hardware
3.2. Consumer Device Software
3.3. Consumer Device Software Management
3.4. Consumer Device Storage Management
3.5. Consumer Device UI Model
3.6. Content Delivery and Content Protection
3.7. UI Management and Presentation Engine Integration
3.8. Diagnostics and Reporting
3.9. Platform metadata
3.10. Accessibility

Chapter II: Consumer Device Platform
(sections include: Front panel buttons, indicators and displays; User Input Device; User input functions; Additional control codes; Physical requirements; Keyboard input)

Chapter III: Consumer Device Software Architecture

Chapter IV: Consumer Device IP Networking

Chapter V: Consumer Device UI Model

Chapter VI: Consumer Device Software Management

Chapter VII

Chapter VIII: Broadcast Content Delivery

Chapter IX: IP Content Delivery
1. Chapter Summary
1.1. Chapter audience
2. Streaming Protocols
2.1. Introduction
2.2. HTTP progressive download
2.3. Adaptive bitrate streaming over HTTP
2.4. Live streaming using UDP
3. Content Download
3.1. Introduction
3.2. HTTP content download
3.3. Storage of downloaded content
4. Content Formats
4.1. Codecs
(further sections: MPEG-2 single program transport stream; MP4 container; Timed Text subtitles; Audio elementary streams)

Chapter X: Content Protection : AV Over IP

Chapter XI: Content Acquisition And Management

Chapter XII: Application Discovery And Installation

Chapter XIII: Consumer Device UI Management
(sections include: Window Manager; Window lifecycle; User Input Event dispatch; Window Property Manipulation; UI Manager; Viewer Perceived Device State; Configuration; Application Player Manager configuration; Application Manager configuration; Window Manager configuration; Application Policy; Resource Management; Management of available memory; Management of available CPU; Watchdog functions)

Chapter XIV: Presentation Engine Integration

Chapter XV: MHEG Integration
1. Chapter Summary
1.1. Chapter audience
2. Graphics Acceleration
2.1. DirectFB integration
2.2. Graphics rendering
2.3. Font rendering
2.4. Graphics plane state
(further sections: Channel Tuning; User Input; DirectFB to MHEG key mappings; User input register; Networking; Application Lifecycle; UIME can kill an MHEG application; UIME is notified of an MHEG application; UIME is notified that an MHEG application has quit; Launching other applications; Receiving launch parameters)

Chapter XVI

Chapter XVII
1 This Document
1.1 Introduction
At YouView, we're committed to helping develop common technical standards for connected TV. Such standards bring significant advantages, allowing any manufacturer to maximise their investment by re-using their developments in complex technologies across multiple markets. Common standards also mean content providers can more easily share their content widely across any connected TV device, and be part of a wider ecosystem. But making this happen isn't easy; it requires not only knowledge and ability but also a willingness and commitment to participate in such an activity.

YouView and our partners have invested heavily in researching and developing the enabling technology for connected TV. We've also been very active in industry standardisation, primarily as members of the Digital Television Group (DTG) in the development of the D-Book and connected TV standards. Harmonisation with other European standardisation activity has been sought with HbbTV and the EBU through close liaison via the DTG and more directly where appropriate. The aim is to maximise the common underlying technology that underpins a range of platforms, including YouView.

By publishing this YouView Core Technical Specification we are giving industry visibility of the technologies within a YouView device, thereby further promoting commonality across connected TV implementations. In this document we provide the specification of a set of technologies to enable the delivery of broadcast and IP delivered content onto a television set via a broadband connected device. This specification covers the overall architecture of the YouView consumer device as well as the interfaces that promote content interoperability. In addition to these key interoperability elements, we have provided additional insight into YouView specific areas to give greater clarity and guidance through example implementation detail.

The technologies described in this final core technical specification for launch are intrinsic to the first YouView devices. We've made this specification available here to help device manufacturers bring products to market utilising the same underlying technologies as a YouView device, regardless of whether or not they choose to use our brand. This will help create a competitive market for connected TV devices in the UK and beyond, which means more innovation, cheaper devices and more choice for consumers.

To assist manufacturers who want to make a YouView branded device, the specification provides our minimum hardware requirements, allowing an informed choice to be made about underlying hardware components. From this document, a manufacturer will also be able to determine the scale and scope of work effort and expertise required, by identifying where their existing device architectures may differ from the YouView architecture.

For clarity, this specification in isolation does not enable a manufacturer to build a YouView branded device. Due to the complexities inherent in the nascent connected TV environment, it is necessary for manufacturers to work closely with YouView to ensure that different devices operating within this environment deliver a performant, secure and robust experience for the viewer. This engagement enables the integration of the YouView platform user interface and ensures compatibility with YouView back-end services.
For details on how to engage with us, please visit www.youview.com/industry/devices.

Whilst this document is primarily intended for manufacturers, there are useful elements covering content formats and content protection that will be of interest to Content Providers, and information on IP delivery of content and device configuration that is relevant to ISPs.
1.2 Relationship to DTG D-Book 7

DTG D-Book 7 is published in two parts:
- Part A, the UK broadcast specification
- Part B, the architecture and protocols which may be used for interoperable IP delivered services in the UK
This YouView specification currently references D-Book Part A v1, which has been finalised by the DTG. It doesn't currently reference Part B, which at the time of writing has been published as a beta v0.9. After the launch of YouView and once D-Book 7 Part B has been published in a non-beta form, YouView will simplify this specification by referencing relevant parts of D-Book 7 Part B, whilst maintaining the functionality currently defined here.
1.3 Implementation
The goal of these specifications is to ensure that certain functionality requirements are attained. Whilst we provide detailed suggestions as to implementation, these are suggestions only. Manufacturers may, for technical or other reasons, need or decide to meet the specified functionality requirements by alternative means. Such alternative means will meet the specification provided the required functionality is delivered.
1.4 Typographical conventions

The following typographical conventions are used in this document:
- Monospace
- Italicised grey
- Surrounding box
Chapter I
Overview
1. Chapter Summary
This chapter describes the operating environment in which a YouView consumer device will be deployed, and in doing so introduces the areas of specification covered by this document.
The way in which the Core Device Software, Platform Software and Device Configuration are managed on the consumer device is specified in Chapter VI Consumer Device Software Management. The mechanism by which Content Provider applications are discovered and installed is specified in Chapter XII Application Discovery And Installation.
Requirements for protection (in delivery) and management (once on the device) of broadcast delivered content are as defined in the DTG D-Book 7.0 Part A. Note: Device Manufacturers may build a variant of a standard YouView consumer device, which includes a CA card slot and Conditional Access technology to support PayTV services over broadcast. Details of this are beyond the scope of this document.

YouView consumer devices are also required to support broadband (IP) delivery of content and services as specified in Chapter IX IP Content Delivery. This includes support for:
- HTTP progressive download (PDL)
- HTTP adaptive bit-rate streaming
- IP Multicast
- Timed text subtitles
Requirements for protection (in delivery) and management (once on the device) of IP delivered content are defined in Chapter X Content Protection : AV Over IP. YouView consumer devices provide means for both broadcast and broadband delivered content to be acquired and stored on the device for later consumption by the viewer. The requirements for this are specified in Chapter XI Content Acquisition And Management.
Chapter XIII Consumer Device UI Management specifies the way in which applications of different types coexist and how the shared device resources required by those applications are managed. Chapter XIV Presentation Engine Integration specifies how application presentation engines must integrate with the Consumer Device Platform, the Consumer Device Software Architecture and the IP Network Stack to coexist with each other and to support the YouView UI model. Chapter XV MHEG Integration describes additional requirements for the integration of the MHEG-5 engine including those relating to the broadcast-driven lifecycle of MHEG-5 applications.
3.10. Accessibility
In addition to the requirements for Access Services already introduced and specified fully in Chapter VIII Broadcast Content Delivery and Chapter IX IP Content Delivery, YouView devices also support a
range of accessibility features. These are provided through the functionality of the Platform Main UI, building on core features described in Chapter XIII Consumer Device UI Management and Chapter XIV Presentation Engine Integration.
Chapter II
Consumer Device Platform
1. Chapter Summary
This chapter defines a common low-level platform for all YouView consumer devices, which in addition to the obvious specification of required connectors etc. seeks to:

1. Ensure that a minimum level of consumer experience can be achieved. This involves the specification of hardware requirements that extend to the main SoC/processor, such as hardware AV codec support, hardware decryption support and hardware graphics acceleration. This is to allow a range of different SoCs to be used whilst still achieving the desired objective. This also involves the specification of how such hardware capability can be exposed to higher-level software in a consistent, compliant manner.

2. Increase the inherent compliance of consumer devices through use of common software components. This relates to a formalisation of strong, existing industry trends around the use of common open-source software components, in line with the current or intended approach of many manufacturers.

This chapter describes requirements for a YouView DTT HD DVR product.
This chapter is not relevant for:
- Application Developers
- ISPs
- Content Providers
2. Consumer device platform

Function | Required | Commentary

2.1 Core hardware
CPU: 950 DMIPS+ | Y | Specific silicon products proposed to be incorporated into a YouView device will be evaluated by YouView.
RAM: 512 Mbyte min | Y |
Minimum DDR bandwidth: 3 Gbyte/sec | Y | Higher bandwidth may improve performance. See section 3.3.
Flash memory | Y | The amount of Flash memory required will depend significantly on which categories of data are stored in Flash and which on disk.

2.2 Internal hard disk
320 GB min. | Y | See section 3.3.
Support for on disk encryption using AES128 or Triple DES | Y | Required for HD broadcast content.

2.3 Input / output
USB 2.0 | Y | Minimum of two: at least one on front panel and one on rear.
Ethernet port supporting 10BASE-T and 100BASE-TX | Y | For Internet connectivity and Home Networking. May be used in conjunction with fixed wire, power line adapter or wireless adapter as convenient in home. Integrated wireless Ethernet is optional. If not provided, the device must support the use of USB WiFi adapters.
Unique MAC address for Ethernet, programmed at manufacture | Y |
2x dual-mode DVB-T/T2 tuners | Y | RF characteristics as specified in DTG D-Book 7 Part A v1. DVB-T2 is used for HD services on digital terrestrial.
1x HDMI v1.3 | Y | Including support for HDCP, auto lipsync, CEC One-Touch Play and System Standby, multichannel linear PCM audio, bitstream audio, YCbCr 4:2:2 12-bit, BT.709 colorimetry.
1x SCART | Y |
Analogue HD outputs | E | Excluded as part of rights management strategy.
Modulated UHF output | N | Strongly desired but not required.
RF loop-through | Y | Remains powered in standby.
Separate stereo audio | N | Stereo audio may be provided on RCA sockets if present.
Digital audio (DTS or AC-3 bitstream or linear stereo PCM) | Y | Optical or coaxial SPDIF connection.
Adjustable delay on audio output | Y | Accommodate display video delay (preferably automatically).
Serial I/O | N |
Common Interface Slot | N | Where provided, CI+ recommended.
CA card slot | N | Could be added in pay operator specific variant of the device.

2.4 Peripheral devices
Support for USB mass storage devices | Y | For presentation of content from USB drives. No requirement to export content.
Support for USB wireless Ethernet adapter | C | Required for domestic connectivity if the IEEE 802.11n standard is not supported internally.
Support for USB human interface devices | Y | Required for accessibility applications.
Support for USB Bluetooth adapter | N |
Support for SD card reader | N |

2.5 Graphics
1280x720 32bpp ARGB8888 graphics plane | Y |
Support for simultaneous display of subtitles, MHEG engine and top level navigation | Y | Note: may be implemented using multiple hardware graphics layers or by hardware-assisted composition onto a single layer.
2D graphics hardware acceleration | Y | Detailed specification provided in section 3.8 below.
3D graphics hardware acceleration | N | Not required. However, support for OpenGL ES 2.0 and/or OpenVG may allow for higher performance rendering for presentation environments that make use of vector shapes or non-rectangular bitmap transformations.

2.6 AV codecs
SD video: MPEG-2 MP@ML 25Hz | Y | As specified in DTG D-Book 7 Part A v1. Decode of existing SD FTA terrestrial services.
SD video: MPEG-4 part 10 (H.264/AVC) Main and High Profile Level 3.0 | Y | As specified in DTG D-Book 7 Part A v1. High profile included as will already be supported for HD services.
HD video: MPEG-4 part 10 (H.264/AVC) Main and High Profile Level 4.0 | Y | As specified in DTG D-Book 7 Part A v1.
Audio: MPEG-1 Layer II | Y | As specified in DTG D-Book 7 Part A v1.
Audio: Dolby AC-3 | N |
Audio: Dolby E-AC-3 (inc. transcode to AC-3 or DTS) | Y | Up to 5.1 channel surround sound.
Audio: Multi-channel HE-AAC (inc. transcode to AC-3 or DTS) | Y | Up to 5.1 channel surround sound.
Audio: Down-mix of E-AC-3 and multi-channel HE-AAC to stereo | Y | As specified by DTG D-Book 7 Part A v1.
Audio: MPEG-1 Layer III | Y | As specified by DTG D-Book 7 Part A v1. In stereo mode, good for low bitrate services.
Audio description | Y | As specified by DTG D-Book 7 Part A v1 section 4.5. Requires dual decode and mix. Devices shall provide an option to enable AD on all outputs or only on SPDIF output.
Content decryption | Y | Elementary streams.

2.7 Remote control
Minimum set of remote control functions | Y | See section 3.15 below.
Basic functions available from the device front panel | Y | Standby button required, other buttons optional.
Remote control design to observe requirements for accessibility | Y | Includes: layout of keys; contrast between labelling and background; and presence of dedicated subtitle and, where relevant, audio description keys.
Single/common remote control design | Y |
Remote control protocol and codes made available by manufacturers | Y |
Free-field remote control | N |

2.8 Operating system & standard libraries
Embedded Linux 2.6.23 or later | Y |
DirectFB graphics environment with multi-application core | Y |

2.9 Misc
Fan-less operation | Y | Devices may have a fan subject to the requirements of section 3.5 below.
3.1. Silicon
All consumer devices will need to be based on SoCs for which the ability to support the necessary functionality has been established.
3.2. Connectivity
3.2.1. Network
The device shall provide an auto-negotiating Ethernet port for connection to a TCP/IP network. The port shall support at least 10BASE-T and 100BASE-TX physical layers via an RJ-45/8P8C socket with MDI wiring as specified by the IEEE 802.3 standard. 1000BASE-T may also be supported. Multicast capability is required. Devices shall support half duplex mode, full duplex mode and full duplex with flow control. The default shall be to auto-negotiate, but devices must also support manual selection of a specific mode. It is recommended that the Ethernet port have link and traffic indicator lights to simplify the diagnosis of connection problems. The Ethernet MAC address shall be available to the configuration user interface for diagnostic purposes. Note: the MAC address shall not be used as a device unique identifier for any other purpose.

Devices shall support wireless networking, either using a built-in IEEE 802.11n adapter, or by supporting the configuration and use of optional USB IEEE 802.11g-compliant and IEEE 802.11n-compliant adapters. Devices may restrict the range of adapters that are supported, for example to limit the number of drivers required. There may be a minimum set of drivers defined to be supported in all devices. Use of WiFi Protected Setup (WPS) is recommended, to simplify configuration. Devices shall also provide options to configure the wireless interface manually by selecting from a list of SSIDs. Devices shall support WEP, WPA/PSK and WPA2/PSK encryption.

PowerLine Telecommunications (PLT) shall be supported as an option for network connectivity, either built in to the device or via an external Ethernet-connected adapter. However, only solutions supporting ETSI TS 102 578 may be provided or recommended for use with connected television devices. Note: the intention here is not to require PLT to be built into the device (although this is an option) but to ensure that the device interoperates with external PLT adapters.
For DVB-T and DVB-T2, the RF characteristics from DTG D-Book 7 Part A v1 shall apply.
A manual audio delay adjustment affecting analogue and SPDIF outputs (both PCM and non-PCM) shall also be provided, as specified in DTG D-Book 7 Part A v1. When used, this manual adjustment shall override the automatic adjustment. These adjustments shall not affect the timing of HDMI audio. The HDMI output shall use BT.709 colorimetry. SD output modes shall not be used over HDMI unless the display offers no HD display mode; in all other cases, the device shall up-scale SD video to the configured HD output resolution. Devices are not required to offer any user-selectable option for HDMI output resolutions below 720p. The HDMI connection shall be monitored to detect when a display is connected or removed.
As specified in DTG D-Book 7 Part A v1, devices shall not have analogue component HD outputs. Devices shall have the hardware capability to output CGMS-A signalling on analogue SD outputs. However, by default, no CGMS-A restrictions shall be signalled. CGMS-A restrictions shall be signalled only when presenting protected IP-delivered content and only where specifically mandated by YouView specifications. Devices shall have at least one SCART connector supporting RGB and composite video. Devices may have an additional AUX SCART connector but this is not required. If present, loop-through of composite video, RGB video and audio inputs from this SCART to the primary SCART shall be provided whilst in standby and under pin 8 control. When a SCART output is active but is not the primary output, it is recommended that volume control functions do not apply to that output. Additional analogue and digital audio outputs are optional. Volume controls shall not affect the SPDIF audio level. Audio output requirements are specified further in sections 3.11.3 and 3.11.4 below.
Acoustic noise: < 26 dB(A).
In all cases, the constraints shall apply across the device's full operating ambient temperature range or up to 45°C, whichever is the higher. Devices may have a fan, provided that they meet the noise level requirements listed above and provided that the fan speed is controlled and arranged to run only as necessary to maintain the internal temperature within its normal operating range.
The Linux kernel shall be suitably configured for use on a platform with limited resources. Device drivers for the following functions are required:
- SATA
- USB 2.0 (including hot plug support)
- Powered and non-powered USB hubs
- USB mass storage
- USB human interface devices (USB keyboard, gamepad and mouse buttons, for accessibility input devices)
- USB 802.11g and 802.11n adapters with WPA and WPA2 support (may be restricted to adapters requiring specific drivers). Note: support for USB WiFi adapters is not required if the device has built-in 802.11n support.
- Ethernet, TCP/IP, multicast (both SSM and ASM)
- IR remote control
- DirectFB fusion shm/ipc module (version 8.1.1 or later)
- Watchdog (if supported by SoC)
- ALSA PCM sound
The following filesystems shall be supported:
- vfat filesystem (for USB devices)
- A filesystem suitable for large media files (e.g. xfs; note: ext3 is unlikely to be suitable due to the excessively long time taken to delete large files)
- A filesystem suitable for general purpose use (e.g. ext3)
- A filesystem suitable for Flash memory (e.g. jffs2)
- A filesystem for unpacking platform software images (cramfs)
Filesystems used within the device shall be recoverable in the event of a power failure while the device is in use.
Required libraries (notes):
- libc, libm, libpthread, librt, libdl, and libresolv. As supplied with compiler.
- DirectFB graphics. Full requirements are detailed in section 3.8 below.
- Shared application window manager for DirectFB.
- IPC library.
- PNG image decoder. Older versions unsuitable due to security concerns.
- JPEG image decoder.
- Zlib compression library. Older versions unsuitable due to security concerns.
- HTTP client library. For security reasons, the most recent possible version should be used.
- XML parser.
- Encryption library for TLS, MHEG security etc. For security reasons, the most recent possible version should always be used.
[2] Where this is not available, it can be omitted but additional porting work will be required for some software components.
[3] Version 1.4.3 can be used with the addition of a set of patches available from YouView.
Further required libraries (notes):
- Lightweight database for configuration information.
- Logging library.
- ALSA sound API for presentation engines.
- Text rendering for MHEG.
Additional libraries may be required to support specific presentation engines, uPnP/DLNA and for device setup and configuration. These are not included in this list. For efficiency, shared libraries shall be used in all cases.
3.8.4. Layers
The DirectFB implementation shall provide at least the capabilities to manage a single primary graphics layer. This layer will be used to compose the graphics from various presentation engines. This composition process may use alpha blending and as a result, the graphics layer will always contain alpha-premultiplied pixel data. Consequently, the layer shall support the option to blend with the video layer in a manner that takes into account this premultiplication. The graphics layer shall support the following capabilities:
- DLCAPS_SURFACE
- DLCAPS_OPACITY
- DLCAPS_ALPHACHANNEL
- DLCAPS_PREMULTIPLIED [4]
[4] The DirectFB implementation must accept a configuration with this option. The platform must either act on this flag directly or must provide an alternative means for the graphics layer to be configured to handle premultiplied pixel data.
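As an illustrative sketch only (not part of this specification), an integrator can confirm these capabilities on the primary layer through the standard DirectFB API:

    #include <directfb.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        IDirectFB                  *dfb;
        IDirectFBDisplayLayer      *layer;
        DFBDisplayLayerDescription  desc;
        DFBDisplayLayerCapabilities required =
            DLCAPS_SURFACE | DLCAPS_OPACITY |
            DLCAPS_ALPHACHANNEL | DLCAPS_PREMULTIPLIED;

        /* Error handling omitted for brevity. */
        DirectFBInit(&argc, &argv);
        DirectFBCreate(&dfb);
        dfb->GetDisplayLayer(dfb, DLID_PRIMARY, &layer);
        layer->GetDescription(layer, &desc);

        printf("primary layer %s the required capabilities\n",
               (desc.caps & required) == required ? "supports" : "is missing");

        layer->Release(layer);
        dfb->Release(dfb);
        return 0;
    }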
3.8.5. 2D acceleration
The graphics performance of the device depends on the presentation engines having access to the 2D hardware acceleration functions provided by the SoC. Devices shall provide acceleration through a suitable DirectFB graphics driver for at least the following set of operations:
- DSBLIT_NOFX (plain blit).
- DSBLIT_BLEND_ALPHACHANNEL blending combinations, used to: blend images that have an alpha channel and premultiplied pixel data; blend images that have an alpha channel and non-premultiplied pixel data; apply a constant alpha value to an image that has an alpha channel and premultiplied pixel data; and apply a constant alpha value to an image that has an alpha channel and non-premultiplied pixel data. For each of these combinations, the blend functions SRC (1, 0) and SRC_OVER (1, 1-Asrc) are required to be supported.
- DSBLIT_SRC_COLORKEY (no blend functions). Required only if colour keying is more efficient than blending and then only if subtitles must be rendered on the same graphics plane as the UI. Otherwise not required.
The performance of these hardware accelerated functions shall be at least as specified in the following table for all pixel formats that are required to be accelerated (see below):
Operation | Performance required
Rectangle fill | 200 Mpixel/sec
Rectangle fill (blended) | 120 Mpixel/sec
Blit | 120 Mpixel/sec
Blit (blended) | 100 Mpixel/sec
StretchBlit upscale (with and without blending) | At least the same as a straight Blit of the destination size
StretchBlit downscale (with and without blending) | At least the same as a straight Blit of the source size
These figures shall be achievable when the device is showing broadcast SD video. Graphics performance may degrade during HD video playback, or playback of content from IP. Accelerated blitting operations shall be supported for at least the following pixel formats: ARGB and ARGB4444 (as source and destination formats) and A8 (as a source format and for A8 to A8 copy). If
subtitles must be rendered on the same graphics plane as the UI, LUT8 is also required as a source format. The results of hardware accelerated operations shall agree with the DirectFB software renderer to within the following tolerances:
Operation | Destination pixel format | Source pixel format | Colour max RMS error | Alpha max error
Simple blit with no blending | Any | Same as destination | Zero | Zero
All other operations | ARGB | ARGB | 2% | 1%
All other operations | ARGB | Not ARGB | 2% | 7%
All other operations | Not ARGB | Any | 10% | 7%
3.8.6. Plug-ins
DirectFB image providers shall be provided for at least the following formats: PNG, JPEG and GIF. Where hardware image decoding is available, this should be supported through DirectFB. Presentation engines are recommended to use DirectFB for image decoding wherever possible in order to take advantage of any hardware acceleration that is available.
3.8.8. Clipping
It is strongly recommended that devices implement clipping in a manner that meets the following requirements:
- pixels outside the defined clipping region are unmodified by blitting and drawing operations
- pixels within the clipping region are rendered in exactly the same way as they would have been were the clipping region not set. Note: special care must be taken to comply with this requirement when performing StretchBlit operations where the clipping region does not align with the destination rectangle.
Implementations in which clipping does not satisfy these requirements will necessitate less efficient rendering techniques for presentation engines.
for interlaced displays. The graphics system is performance critical. Devices shall optimise the graphics path appropriately for the particular SoC used.
The following features are not required and should be disabled: FTP, ldap, ldaps, dict, telnet, tftp, sspi, gnutls, nss, libidn
The H.264/AVC video decoder shall support dynamic changes of bitrate and decoded frame sizes that occur at IDR points.
Audio decoders shall support dynamic changes of encoded bitrate between access units. The AAC decoder shall support seamless transitions between access units that use SBR and those that do not. HE-AAC decoders shall support dynamic range control as described in DTG D-Book 7 Part A v1 section 4.4.1.2.
[Figure 2 : Audio flow logical model where the main programme sound is stereo]

[Figure 3 : Audio flow logical model where the main programme sound is multi-channel]

Notes:
1. This mode is required by DTG D-Book 7 but is not recommended unless specifically requested by the user as it does not allow for mixing with other audio sources on the receiver.
2. Audio description may be routed to all outputs, to SPDIF only, or disabled. When audio description is enabled, the receiver may restrict all outputs to stereo.
Audio delay for HDMI is controlled by the HDMI Auto Lipsync Correction feature. Audio delay for analogue and SPDIF outputs is controlled by a user configuration setting.
Volume level normalisation is required in order to ensure that the audio level at the output of the device is the same for all audio codecs. This is achieved by applying a gain factor that is the difference between a target level for a particular output and the reference level of the audio source. The gain factor may be positive or negative. Note: the audio level adjustments required to meet these requirements are not shown on the diagram but are described below.

E-AC3 services shall be normalised using the reference level indicated by the dialnorm parameter within the E-AC3 bitstream. HE-AAC services shall be normalised using the reference level indicated by the prog_ref_level parameter within the HE-AAC bitstream. MPEG-1 layer II services shall be assumed to have a reference level of -23 dB FS. Devices shall assume that non-programme sound is mixed to a programme reference level of -23 dB FS so as to match standard definition MPEG-1 layer II audio services.

At the SPDIF output and for HDMI outputs where the sink supports bitstream audio, audio shall be normalised such that the reference level corresponds to a target level of -31 dB FS. For HDMI outputs where the sink does not support bitstream audio and for all analogue outputs, the audio shall be normalised such that the reference level corresponds to a target level of -23 dB FS. As an example, for an HE-AAC bitstream with a prog_ref_level parameter of 0x7c (-31 dB FS) going to an HDMI output where the sink does not support bitstream audio, a gain factor of +8 dB shall be applied. Note: bitstreams can be expected to carry dynamic range control signalling that will ensure that clipping does not occur as a result of the normalisation process. See also DTG D-Book 7 Part A v1 section 4.4.1.2.

The diagrams do not show any sample rate conversion but this may be required where different audio sources have a different sample rate. Non-programme sound may originate from any presentation engine and there may be multiple sources active at any time. For the purposes of these diagrams, all non-programme sound is assumed to have been premixed to stereo PCM. Devices may support additional audio output options to those shown.
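The normalisation rule reduces to simple arithmetic. As a worked sketch (illustrative only; the function name is not from this specification):

    /* Gain factor in dB: target output level minus the source's reference
       level, both expressed in dB FS. May be positive or negative. */
    double normalisation_gain_db(double target_db_fs, double reference_db_fs)
    {
        return target_db_fs - reference_db_fs;
    }

    /* Worked example from the text: an HE-AAC prog_ref_level of -31 dB FS
       routed to a PCM-only HDMI sink (target -23 dB FS):
       normalisation_gain_db(-23.0, -31.0) == +8.0 dB. */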
[Figure: presentation engine graphics (DirectFB windows etc.) composed onto the graphics plane]
Multiple components in the YouView software architecture need to access the primary video decode path. Devices shall implement a resource management function such that each such component can use the hardware resources associated with this path. The resource management function is not required to support multiple components controlling the primary decode path at the same time.
In the figure below, some controls are expected to be adjusted by the viewer; those shaded blue are expected to be adjusted by application software.
[Figure: simplified AV presentation model showing, for the selected source, the video chain (selector V-S1, video decoder, format conversion and scaling, controls V-P1, V-P2 and V-P3, to VIDEO OUT), the audio chain (selectors A-S1 and A-S2, audio decoder, controls A-P1 and A-P2, to MAIN AUDIO and SPDIF), the audio description chain (selectors AD-S1 and AD-S2, AD decoder, controls AD-E1, AD-E2 and AD-P1) and the subtitle chain (selectors S-S1 and S-S2, enable S-E1, subtitle decoder). Note: this is a simplified view of the audio system. See also section 3.11.3.]
Notes:
i. Presentation engines may output stereo PCM audio samples to be combined with audio from the media pipeline. Please refer to the appropriate presentation engine integration specification for details.
ii. The audio description decoder (marked AD decoder) is able to adjust the volume of the audio coming out of the main audio decoder as part of its built-in behaviour. This is depicted by the dotted arrow. The audio volume control A-P1 affects the audio level after this has happened.
Control | Function
A-S2 | This is an application-controlled audio component selector. It can select between any of the audio components from the selected source, the default component from A-S1 or nothing.
A-P1 | This is the application-controlled audio volume control. It is specific to the audio coming from the audio decoder and does not affect other audio sources on the device.
A-P2 | This is the viewer's master volume control and controls all audio outputs that are controllable.
AD-S1 | This is the viewer's audio description language preference (which may be set automatically to match A-S1).
AD-S2 | This is an application-controlled audio description component selector. It can select between any of the audio description components from the selected source, the default component from AD-S1 or nothing.
AD-E1 | This is the viewer's audio description enabling control.
AD-P1 | This is the viewer's audio description volume control. It adjusts the audio description volume independently of the main audio volume. The control defaults to following A-P2. The viewer controls the offset that makes the audio description quieter or louder relative to the main audio.
AD-E2 | This control determines whether audio description appears on all outputs or only on the SPDIF output.
S-S1 | This is the viewer's subtitle language preference (which may be set automatically to match A-S1).
S-S2 | This is the application-controlled subtitle component selector. It can select between any of the subtitle components from the selected source, the default component from S-S1 or nothing.
S-E1 | This is the viewer's subtitle enabling control.

Table 11 : Presentation controls
The core device software image is managed by the device manufacturer. Devices must be able to update their software image.
Devices shall be tolerant of power interruption at any time. More detailed specification of the device upgrade requirements can be found in Chapter VI Consumer Device Software Management.
3.13. Security
Note: nothing in this section implies that any of the keys or identifiers listed here will be made available to application-level software. The device shall use a secure boot mechanism to ensure that only manufacturer-signed software images can be run on the device. JTAG shall be disabled or password protected in production devices. The device shall have a unique encryption key known to the manufacturer, held securely and not available directly to the application CPU. The device shall have a unique immutable ID available to software running on the box and known to the manufacturer. This ID shall not be the device's MAC address. The device shall have a unique RSA private key and corresponding X.509 certificate signed by the manufacturer. The RSA private key shall not be stored unencrypted in Flash memory or on the hard disk. No key described above shall be present unencrypted in the device software image, nor in any device upgrade file or over-air download carousel. Devices shall support hardware-accelerated image decryption and signature verification, including of the root file system, other file systems and the platform software. The device shall be capable of supporting DTCP-IP. Content protection mechanisms for broadcast-delivered content shall be as specified in DTG D-Book 7 Part A v1. Content protection mechanisms for IP-delivered content shall be as described in Chapter X Content Protection : AV Over IP.
Function | Description | DirectFB key symbol(s)
Close | Allow the user to immediately exit an application, EPG or other user interaction function. |
Menu | Display the top level user interface menu. | DIKS_MENU
Guide | Show the main programme guide. | DIKS_EPG
Mute sound | Mute the audio output. | DIKS_MUTE
Help | Access contextual help. | DIKS_HELP
Cursor control keys (Up, Down, Left, Right) | Keys used to provide user interaction to a variety of device functions. | DIKS_CURSOR_UP, DIKS_CURSOR_DOWN, DIKS_CURSOR_LEFT, DIKS_CURSOR_RIGHT
OK | Allow the user to confirm or select a particular screen choice or action. | DIKS_OK
Back | Allow the user to move back one step in an interactive application, EPG or other user interaction function. | DIKS_BACK
Volume up / down | Increase or decrease the audio level. | DIKS_VOLUME_UP, DIKS_VOLUME_DOWN
Channel up / down | Step up or down to the next service available to the user, normally ordered by number. | DIKS_CHANNEL_UP, DIKS_CHANNEL_DOWN
Text | Enter an interactive application while viewing a channel. | DIKS_TEXT
Info | Display programme information. | DIKS_INFO
Zoom | Accessibility and web view management. | DIKS_ZOOM
Playback control | Keys to control playback of recorded or streamed content. | DIKS_REWIND, DIKS_PLAY, DIKS_FASTFORWARD, DIKS_STOP, DIKS_PAUSE, DIKS_RECORD
Skip forward / back | Keys to skip forward and back by 30 seconds. | DIKS_NEXT, DIKS_PREVIOUS
Colour keys | Buttons available to device functions to aid user interaction. May also be used to enter an interactive application while viewing a channel. | DIKS_RED, DIKS_GREEN, DIKS_YELLOW, DIKS_BLUE
Numeric entry | Primarily for numeric entry but also labelled so as to support text entry. | DIKS_0 to DIKS_9 with key identifiers set to the values DIKI_KP_0 through DIKI_KP_9
Dual function: Shift (during text entry) / Audio Description (toggle) | Toggle upper and lower case text entry. Enable/disable audio description. |
Dual function: Delete (during text entry) / Subtitles (toggle) | Delete a character from a text field. Enable/disable subtitles. | DIKS_SUBTITLE
Search | Short cut to search function within the YouView UI. | DIKS_GOTO
On Demand | Short cut to on demand section of the UI. | DIKS_F2
ISP button | Affiliate ISP short cut to broadband services page within the YouView UI. | DIKS_F3
MyView | Short cut to the MyView section of the UI. | DIKS_F4
Retailer / device manufacturer brand shortcut | Short cut to retailer / device manufacturer brand presence within the YouView UI. | DIKS_F5
Any additional manufacturer-specific keys shall use DirectFB key symbols in the range DIKS_CUSTOM0 to DIKS_CUSTOM9.
The remote shall be capable of single-handed operation by either hand, easy to grip and stable if placed on a flat surface. It should be non-slippery, for example by means of a textured finish. Care should be taken to ensure that the directional properties of the communications link from the remote control to the device are as wide-angle as possible. If batteries are supplied, they should provide a minimum lifetime of 12 months with typical remote control use.
Chapter III
Consumer Device Software Architecture
1. Chapter Summary
This chapter provides an overview of the Consumer Device Software Architecture and key operational requirements for a YouView device, including message-based communication and management of system time.
This chapter is not relevant for:
- Application Developers
- ISPs
- Content Providers
2. Introduction
This chapter presents an architecture that has been developed in response to a number of requirements and challenges that include:
- Bringing together broadcast and IP delivery technologies, to ensure effective co-existence.
- Supporting multiple application frameworks and presentation technologies.
- Supporting multiple concurrent applications.
- Integrating components from third party suppliers.
- Enabling hardware graphics acceleration and making it easily accessible to ensure the best possible viewer experience.
- Reducing fragmentation of technology in the connected television ecosystem, to maximise availability of content, and reduce content distribution costs.
- Improving device compliance, to reduce authoring costs for content providers.
Traditional monolithic middleware solutions do not provide a suitable foundation on which to build a solution that addresses all these requirements. Software designs with a single main executable and tight coupling between functional areas often exhibit problems in areas such as memory management, security, resource management, and make it difficult to integrate newer technologies, so a more modular approach is needed. However, the extent of the investment made by Device Manufacturers and software vendors in existing technology is acknowledged and the principle of reusing existing software is central to this architecture. This chapter defines key technology foundations and outlines a solution that reduces the risk and complexity associated with software integration. This includes the use of the Linux operating system, a multi-process model, a message bus for inter-process communication and a set of common opensource libraries.
[Figure: layered software architecture: Platform Applications and Platform Configuration above a Message Bus connecting software components (e.g. Media Playback, DVR, Metadata), running on the hardware]
The architecture has the following benefits over a monolithic approach:
- It allows software components and Applications to be developed in isolation and tested before final integration.
- It allows for the re-use of existing software components and simplifies the integration of third party software.
- It supports models where the Platform Operator is able to manage and update Platform Applications and Platform Configurations independently.
Linux is deployed on millions of PCs and consumer electronics devices, and the skills to develop and optimise for it are common in the industry. In addition, a wide range of open-source products have been developed for, or ported to, Linux. The Linux kernel and core libraries provide facilities for executing and scheduling multiple processes, which enables the device to support multiple application frameworks and presentation technologies, and content from multiple sources. This allows applications and other software service components to run in separate operating system processes with appropriate permissions to control access to these services and the underlying system resources. This is essential for enabling a wide range of content from trusted and un-trusted sources. It also allows Platform Applications that provide the user interface to be developed and maintained independently without requiring changes to the Core Device Software.
4. Message Bus
4.1. Message Bus Overview
The primary mechanism for inter-process communication (IPC) for device software components is based on D-Bus, an open-source project from freedesktop.org that provides transport for remote method calls, asynchronous events, and error messages. D-Bus has gained widespread support in Linux based systems, and is used in the majority of desktop and embedded Linux distributions. It was chosen as the IPC mechanism as it supports a suitable range of data types and usage patterns.
Devices may be configured with additional Message Buses to support Service Provider or Manufacturer specific features.
Clients may block and wait for the return value, or provide a callback function to receive the return value asynchronously.
The device shall support server (adaptor) API bindings to D-Bus with the following features:
- D-Bus signal emission from C++ event handling code.
- D-Bus error generation from C++ exception and error handling code.
- Processing D-Bus method calls in the main message dispatcher thread.
- Processing D-Bus method calls in separate worker threads.
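For illustration, the sketch below emits an asynchronous signal using the reference libdbus C API; the bus, object path, interface and signal name are hypothetical examples, and production components would normally use the higher-level C++ bindings described above:

    #include <dbus/dbus.h>
    #include <stdio.h>

    int emit_ready_signal(void)
    {
        DBusError err;
        DBusConnection *conn;
        DBusMessage *msg;

        dbus_error_init(&err);
        conn = dbus_bus_get(DBUS_BUS_SYSTEM, &err);  /* shared connection */
        if (!conn) {
            fprintf(stderr, "bus connection failed: %s\n", err.message);
            dbus_error_free(&err);
            return -1;
        }

        /* Hypothetical object path, interface and signal name. */
        msg = dbus_message_new_signal("/com/example/Player",
                                      "com.example.Player", "Ready");
        if (!msg)
            return -1;

        dbus_connection_send(conn, msg, NULL);  /* asynchronous, no reply */
        dbus_connection_flush(conn);
        dbus_message_unref(msg);
        return 0;
    }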
5. Device Operation
5.1. Time
5.1.1. Sources of time
Devices shall support setting the device's clock using the following methods:
- Parsing of a DVB TDT or TOT received in a broadcast transport stream
- From a time server using SNTP version 4 as defined in IETF RFC 5905
- From a time server using NTP version 4 as defined in IETF RFC 5905
If the device is operating in a mode where broadcast or IP connectivity is not available, sources requiring that connectivity shall be ignored. Devices shall implement non-volatile storage for the last known time, for the purposes of detecting roll-back of time.
5.1.5. SNTP
When configured to use SNTP, implementations shall not use a full NTP client to set the time. Note: this specifically excludes the use of ntpdate. An example SNTP client is available in the reference NTP software distribution from ntp.org. Devices shall retry up to 5 times at one-second intervals to each configured server. Acquisition of time using SNTP shall be considered to have failed if no time has been received from a configured server within that time, or if the device does not have IP network connectivity. When SNTP time is configured, devices shall attempt to resynchronise the system clock at regular intervals, adding to the configured polling interval a randomly chosen offset of between 0 and 3599 seconds, uniformly distributed, in order to prevent bursts of requests to the servers.
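A minimal sketch of this jittered scheduling (illustrative only; the function and parameter names are not from this specification, and the PRNG is assumed to have been seeded):

    #include <stdlib.h>
    #include <time.h>

    /* Next resynchronisation = now + configured polling interval + a
       uniformly distributed offset of 0..3599 seconds, so that a population
       of devices does not poll the servers in bursts. */
    time_t next_sntp_poll(time_t now, time_t polling_interval)
    {
        return now + polling_interval + (random() % 3600);
    }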
Chapter IV
Consumer Device IP Networking
1. Chapter Summary
This chapter covers the specification and configuration of the core network stack and defines mechanisms for managing the resources available to the device's IP connection. This chapter does not cover hardware requirements and network interfaces (which are covered in Chapter II Consumer Device Platform) or the provisioning of the device with configuration information (which is addressed by Chapter VI Consumer Device Software Management).
This chapter is not relevant for:
- Content Providers
- Application Developers
- ISPs
2. IP Network Stack
2.1. Core specification
Devices shall support IPv4, ICMP, IGMPv3, TCP and UDP. Higher level protocols are specified elsewhere in this chapter. Support for IPv6 will be required at a later date. The IP network stack shall support filtering, prioritisation and measurement of traffic based on protocol, source, destination and originating process.
2.3. Multicast
Multicast shall be fully supported at the Ethernet and IP layers. Both Source-Specific Multicast (SSM) and Any-Source Multicast (ASM) shall be supported. This requires the use of IGMPv3.
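For illustration (the addresses are examples only), an SSM group can be joined through the standard Linux socket API; supplying a source filter causes the kernel to emit the corresponding IGMPv3 membership report:

    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <string.h>
    #include <sys/socket.h>

    /* Join group 'group' restricted to sender 'source' on a bound UDP
       socket; ASM groups use the plain IP_ADD_MEMBERSHIP option instead. */
    int join_ssm(int sock, const char *group, const char *source)
    {
        struct ip_mreq_source mreq;

        memset(&mreq, 0, sizeof mreq);
        inet_pton(AF_INET, group,  &mreq.imr_multiaddr);
        inet_pton(AF_INET, source, &mreq.imr_sourceaddr);
        mreq.imr_interface.s_addr = htonl(INADDR_ANY);

        return setsockopt(sock, IPPROTO_IP, IP_ADD_SOURCE_MEMBERSHIP,
                          &mreq, sizeof mreq);
    }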
2.4. IPv6
Support for IPv6 will be required at a later date. IPv6-capable devices shall support the requirements of the IPv6 Node Requirements contained in the IETF draft update to RFC4294, known as RFC4294bis, with support for Neighbour discovery, Path MTU Discovery, Stub-resolver and DHCPv6. DHCPv6 configuration shall be attempted to obtain DNS server addresses. If this is unsuccessful, IPv4 DNS server addresses shall be used.
2.5. Performance
The IP stack shall be capable of handling a throughput of at least three concurrent 10 Mbps TCP sessions. It is anticipated that content bitrates could range from around 700 kbps up to 10 Mbps for future HD content.
Devices shall be able to handle an encrypted TLS session with a throughput of 8 Mbps (RC4 cipher) or 4 Mbps (AES128 cipher) without significant impact on the responsiveness of other device functions.
2.6.5. Cookies
Some web services need to store information locally on the client to establish an HTTP session that spans a number of HTTP requests over a specified period of time. Devices shall support session and persistent cookies via the Cookie request header and Set-Cookie response header as defined by IETF RFC 2109. Devices shall support session cookies for HTTP requests made by all software components on the device. Unless otherwise stated for particular uses of HTTP, session cookies shall be retained until the device next enters standby and then discarded. Devices shall support persistent cookies via the Max-Age directive according to IETF RFC 2109 for all HTTP requests made from presentation engines but not for other uses of HTTP. Persistent cookies shall remain available after the device is restarted and shall survive a power cycle. Devices shall maintain separate cookie stores for holding persistent cookies created by different applications and different presentation engines. Devices shall not accept cookies that have no path attribute. Devices shall accept cookies in the "Netscape" format in order to provide compatibility with older HTTP/1.0 servers.
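Where libcurl is the HTTP client library (an assumption consistent with its use elsewhere in this chapter), per-handle session cookie handling can be enabled as sketched below; persistent, per-application cookie stores are a separate, device-specific mechanism:

    #include <curl/curl.h>

    static void enable_session_cookies(CURL *h)
    {
        /* An empty filename starts libcurl's cookie engine without reading
           a file: Set-Cookie response headers are then honoured and replayed
           for the lifetime of the handle. */
        curl_easy_setopt(h, CURLOPT_COOKIEFILE, "");
    }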
2.6.6. Timeouts
Except where otherwise stated, timeouts shall be as defined in DTG D-Book 7 Part A v1 section 17.12.
2.6.7. Redirection
HTTP/1.1 3xx redirection response codes shall be supported wherever HTTP is used. This includes requests for adaptive bitrate manifests and segments. Devices shall support chains of at least 5 redirections. Redirection from an http: URL to an https: URL and vice versa shall be supported. Implementations shall detect infinite loop redirections, terminate the infinite loop and report the failure to the component making the request. If a 300 Multiple Choices response code is received and the particular use of HTTP does not provide any means to choose between the representations offered, the response shall be treated as a 302 response if it contains a Location header and considered an error if it does not.
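Assuming libcurl again, the redirection rules above map onto standard options; a sketch:

    #include <curl/curl.h>

    static void configure_redirects(CURL *h)
    {
        curl_easy_setopt(h, CURLOPT_FOLLOWLOCATION, 1L);  /* follow 3xx */
        curl_easy_setopt(h, CURLOPT_MAXREDIRS, 5L);       /* chains of 5 */
        /* A redirect loop then fails with CURLE_TOO_MANY_REDIRECTS, which
           is reported to the component that made the request. */
    }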
2.6.8. Encodings
YouView devices shall offer at least gzip and deflate encodings in an Accept-Encoding header for all HTTP requests that could return text content including requests for adaptive bitrate manifest files (see Chapter IX IP Content Delivery), requests for timed text subtitle files (see Chapter IX IP Content Delivery) and requests from presentation engines. Devices shall support HTTP responses with a Content-Encoding header containing gzip or deflate. Note: libcurl supports these encodings transparently but this must be enabled by setting the CURLOPT_ENCODING option to an empty string.
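A sketch of the libcurl configuration referred to in the note above:

    #include <curl/curl.h>

    static void enable_compressed_encodings(CURL *h)
    {
        /* The empty string makes libcurl offer every encoding it supports
           (gzip and deflate) in Accept-Encoding and decode the response
           body transparently. */
        curl_easy_setopt(h, CURLOPT_ENCODING, "");
    }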
3. IP Resource Management
3.1. Introduction
The device's IP network stack is used for a variety of purposes, some of which are time critical and some of which are not. In addition, the user may have a broadband tariff in which traffic is charged at different rates at different times of day. This section describes how the device manages usage of the IP connection. Three classes of traffic are defined:
- Time Critical Traffic
- High priority Background Downloads
- Low priority Background Downloads
The mapping of individual cases of network use to these classes is outside the scope of this chapter.
Chapter V
Consumer Device UI Model
1. Chapter Summary
This chapter defines a device UI model for how the user experience provided by a consumer device can be realised. The model supports a federated approach where different parties provide elements of the user experience. It also describes how these elements need to integrate together so as to result in a unified, seamless experience. The model provides a means for the Platform Main UI provider to deploy a common user experience across a range of devices from different manufacturers.
3. UI Model Implementation
3.1. Introduction
The Device UI Model supports a federated approach where different parties provide the applications comprising the overall user experience. It also allows more than one party to contribute to the implementation and/or configuration of a particular application to support extension and customisation of the user experience.
3.2.1. Setup
The Setup application uses a wizard paradigm to step the user through the minimum number of user settings required to configure the device into a usable state.
Figure 7 : Setup
3.2.2. Settings
The Settings application provides the user with access to the full range of user settings within a hierarchical structure. Local Diagnostics functionality aims to help users solve problems of device operation, e.g. check broadcast signal strength, check IP network activity.
Figure 8 : Settings
Chapter VI
Consumer Device Software Management
1. Chapter Summary
This chapter describes how software updates and configuration settings are managed on the device:
- Core Device Software is installed and managed by the device manufacturer
- Platform Software contains the Platform Main UI and can be updated by the platform operator independently of manufacturers and without upgrading the Core Device Software
- Device configuration files contain parameters used to control aspects of the device's operation without needing to update either the Core Device Software or the Platform Software
This chapter also introduces the following logical components:
- Local Storage Repository (LSR)
- Software Manager
1.1. Audience
This chapter is for:
- Device Manufacturers, who need to:
  - Implement a robust mechanism for discovery, acquisition, verification and activation of updated versions of Core Device Software and Platform Software
  - Implement support for the discovery and acquisition of updated versions of configuration files and store the updated configuration on the device
ISPs, to understand the device configuration mechanism and the means by which ISP-specific configuration is supported.
[Figure: relationship between software components: Core Device Software (1:n) Platform Software (1:n) Device configuration]
- The viewer's Internet Service Provider (ISP), and
- The viewer themselves
The logical components that are involved in device configuration are shown in Figure 12. The Local Storage Repository (LSR) represents a repository containing the device configuration and is also queried to get information about the state of the device. The YouView Software Manager (YVSM) interface in Figure 12 is described in this specification and is the interface used to download updated versions of Platform Software and YouView configuration.
[Figure 12 : Logical components involved in device configuration. YouView provides back-end functionality with a client-facing interface (web servers); the ISP provides back-end functionality with a client-facing interface (web services) supplying ISP Config. On the device, the Platform Main UI and the Software Manager read configuration.]
they are available, will set the Update Time to the current time and immediately attempt to initiate their acquisition.

4.1.1.7. Opportunistic Update Check Window

The Opportunistic Update Check Window is a period of time immediately preceding the start of the Update Window. The start of the Opportunistic Update Check Window is calculated by subtracting the Opportunistic Check Window Duration from the Update Window start time. During this window the Software Manager component shall not perform update acquisition.

4.1.1.8. Opportunistic Update Check Time

The Opportunistic Update Check Time is a Distributed Time within the Opportunistic Update Check Window at which the Software Manager component shall check for the availability of updates but shall not perform update acquisition.

4.1.1.9. Update Check Window

The Update Check Window is a period of time commencing at the start of the Opportunistic Update Check Window and ending at the end of the Scheduled Update Check Window.
A Platform Software acquisition can also end up in one of the following error states:
ACQUISITION_FAILED: The device has failed to acquire, verify and activate the updated version of Platform Software.
Examples of the URL template are:
${baseurl}/${compatibility_version}/${platform_software}.xml
and
${baseurl}/${compatibility_version}/${platform_software}.xml?oem=${manufacturer}&model=${model}
When downloading the Platform Software Image Manifest, if the HTTP response contains an ETag header, the device shall store that ETag in the LSR and include that ETag in an If-None-Match header for all subsequent requests to check for the Platform Software Image Manifest. The device shall take the actions in the following table based on the HTTP response status code returned when downloading the Platform Software Image Manifest.
HTTP Response | Description | Device action
200 OK | The manifest file has been downloaded successfully. | Proceed to the next step in the process of upgrading Platform Software.
304 Not Modified | Occurs when the device presents the server with the ETag of the previous request. The Platform Software Image Manifest hasn't changed since the last time the device requested it. | The manifest file hasn't changed, so there is no updated version of Platform Software available for this device.
Any other HTTP status code, including HTTP redirects and errors | | Deal with the response as described in Chapter IV Consumer Device IP Networking.
The Platform Software Image Manifest HTTP response shall be Content-Type: text/xml.
The body of the HTTP response shall be the Platform Software Image Manifest in the format described in section 6.2.3. If the device receives an HTTP 200 or HTTP 304 response code from the request for the PSI manifest, it shall store the current date and time in the LSR.
In the event that a download of the Platform Software Image Manifest fails on five consecutive occasions, the device shall revert to the base URL and URL template configured in the Platform Provisioning File for the next attempt.
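As an illustration of the manifest check and ETag handling described above, the following Python sketch uses the common requests library; the expand_template helper, the variable values and the LSR represented as a dict are assumptions for illustration, not part of this specification.

    import requests

    def expand_template(template: str, variables: dict) -> str:
        # Replace each ${name} placeholder with its URL template variable value.
        for name, value in variables.items():
            template = template.replace("${" + name + "}", value)
        return template

    def check_psi_manifest(template: str, variables: dict, lsr: dict):
        # Build the manifest URL from the configured URL template.
        url = expand_template(template, variables)
        headers = {}
        # Quote the stored ETag, if any, so the server can answer 304.
        if lsr.get("psi_manifest_etag"):
            headers["If-None-Match"] = lsr["psi_manifest_etag"]
        resp = requests.get(url, headers=headers, timeout=30)
        if resp.status_code == 200:
            if "ETag" in resp.headers:
                lsr["psi_manifest_etag"] = resp.headers["ETag"]
            return resp.content      # new manifest: proceed with the upgrade process
        if resp.status_code == 304:
            return None              # unchanged: no updated Platform Software
        # Any other status code: handle per the Chapter IV back-off rules.
        resp.raise_for_status()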
5. Check the preconditions on the verified PSID as described in section 6.4.1.
If any of the verification steps above fail, then the installation of the updated version of Platform Software shall be terminated.
6.4.2. Decryption
TK1 shall be decrypted using the device's Platform Software private key. The Platform Software image shall be decrypted using TK1. The algorithm used shall be AES-128-CBC.
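A minimal sketch of this two-stage decryption using the Python cryptography package; the IV handling and the use of RSAES-OAEP to unwrap TK1 are illustrative assumptions (the normative key transport format is defined in section 6.7.3.3), and a real device would perform these operations with protected key storage.

    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import padding
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    def decrypt_platform_software(wrapped_tk1: bytes, iv: bytes,
                                  ciphertext: bytes, device_private_key) -> bytes:
        # Stage 1: recover the 128-bit transport key TK1 with the device's
        # Platform Software private key (RSAES-OAEP assumed for illustration).
        tk1 = device_private_key.decrypt(
            wrapped_tk1,
            padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA1()),
                         algorithm=hashes.SHA1(), label=None),
        )
        # Stage 2: decrypt the image with AES-128-CBC under TK1.
        # (Removal of the block termination padding is omitted here.)
        decryptor = Cipher(algorithms.AES(tk1), modes.CBC(iv)).decryptor()
        return decryptor.update(ciphertext) + decryptor.finalize()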
6.4.3. Verification
Devices shall calculate a message digest of the signed content and verify the digital signature as per RFC 5652 Section 5.6. Devices shall assert that the Signed-data section (RFC 5652 Section 5) has been signed by the platform operator. The certificate policy term 1.2.826.0.1.7308805.1.1 shall be explicitly present in the signer's certificate. Intermediate certificates shall either have the same policy term or the RFC 5280 defined anyPolicy. Devices shall check for the presence of this policy throughout the chain and, if it is not present, verification shall fail.
Item | Contents
Platform Software image minor version | Platform Software minor version (e.g. 0x01)
Platform Software image revision | Platform Software revision (e.g. 0x00, 0x00)
Capabilities | 0x00 (all 8 bytes)
Table 16 : Platform Software Image Descriptor
The Platform Software packaging format version identifier is used to indicate the version of the packaging format used by the Platform Software image. Versions of the Platform Software and the Required Compatibility Version shall be identified by major, minor and revision numbers. The device shall only install a Platform Software image if the Compatibility Version is supported by the Core Device Software. The Platform Software image version number will increment with each updated release. The Capabilities item is a reserved section of the PSID which will be used at a later date to indicate the device capabilities that are compatible with a specific version of a Platform Software image. Multi-byte fields in the PSID shall be big-endian unsigned integers.
6.7.3.1. Encapsulated Content
The eContent section (RFC 5652 Section 5.2) of the CMS Signed-data section shall be of the form:
Item | Offset (bytes) | Size (bytes)
Platform Software Image Descriptor (PSID) | 0x00 | 0x20
CramFS image | 0x20 | Variable
Table 17 : Encapsulated Content
The PSID is duplicated in the encapsulated content section since this part of the message is verifiable.
6.7.3.2. Signing
The platform operator shall provide a Platform Software signing X.509 (RFC 5280) certificate containing an RSA 2048-bit public key in the SignerInfo section of the Platform Software image. The private component of the Platform Software signing certificate key shall be used to compute the signature. The digest algorithm shall be SHA256. The signature algorithm shall be RSASSA-PKCS1-v1_5. The SHA256 digest shall be computed over the plain-text Platform Software image. The Platform Software signing certificate and any intermediate certificates up to, but not including, the trust anchor shall be carried in the certificates component of the signed-data.
6.7.3.3. Encryption
Manufacturers shall provide the platform operator with one or more key transport keys. Each key transport key (KTK) shall be a 2048-bit RSA public key.
The private key component of the device KTK shall be stored as a device secret. The platform operator shall generate a new random content encryption key for each distribution of the Platform Software image. The content encryption key shall be a 128 bit key. The Content Encryption Key (CEK) shall be encrypted by the platform operator once for each KTK provided to the platform operator. The packager shall encrypt the CEK with RSAES-OAEP under each of the manufacturer supplied public keys. Each encrypted CEK shall be packaged in a KeyTransRecipientInfo structure. This structure shall use the subjectKeyIdentifier supplied in the OEM public key certificate as the RecipientIdentifier. The Platform Software image will be encrypted using the CEK. The bulk encryption algorithm used shall be AES-128-CBC. The block termination padding algorithm shall be the algorithm defined in RFC 3852.
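The signing and key-transport operations described in sections 6.7.3.2 and 6.7.3.3 can be sketched with the Python cryptography package as follows; the CMS (RFC 5652) packaging itself is not shown, and the OAEP hash parameters are an assumption (the RFC 3560 defaults use SHA-1), so this is illustrative rather than normative.

    import os
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import padding

    def sign_image(image: bytes, signing_key) -> bytes:
        # SHA256 digest over the plain-text image, signed with RSASSA-PKCS1-v1_5.
        return signing_key.sign(image, padding.PKCS1v15(), hashes.SHA256())

    def wrap_cek(ktk_public_keys: list) -> tuple[bytes, list[bytes]]:
        # A fresh random 128-bit content encryption key per distribution.
        cek = os.urandom(16)
        # Encrypt the CEK with RSAES-OAEP under each manufacturer-supplied KTK;
        # each result would be carried in a KeyTransRecipientInfo structure.
        wrapped = [
            ktk.encrypt(cek, padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA1()),
                                          algorithm=hashes.SHA1(), label=None))
            for ktk in ktk_public_keys
        ]
        return cek, wrapped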
7. Device configuration
7.1. Overview
For the efficient operation of the platform, a number of configuration parameters are required. A configuration mechanism is specified that allows configuration parameters to be adjusted by a number of configuration sources. The configuration sources are the device manufacturer, the platform operator, the viewer's Internet Service Provider (ISP) and the viewer themselves.
To enable this, the device configuration contains a set of defined base URLs and URL templates from which configuration data must periodically be gathered. A set of named configuration parameters, and the configuration sources that have permission to set each of these parameters, is defined by the platform operator. Sources of configuration data are processed in the specific order outlined in section 7.1.1.2 to arrive at a merged database of configuration values. The order in which the configuration sources are applied determines which configuration sources are able to set and overwrite values set by another configuration source. When configuration updates are processed, the access control mechanism described in section 7.2 is used to ensure that parameters are overwritten only where there is permission to do so. This configuration merging operation is repeated when any one of the configuration sources has changed and is successfully downloaded.
The manufacturer shall apply its default configuration to the Local Storage Repository before applying any other local or remote configuration. The platform operator default configuration shall be contained in the Platform Provisioning File provided as part of the Platform Software.
7.1.1.2. Remote configuration sources
The remote configuration sources are used to configure the device and are applied in the order defined here:
- Platform Configuration is provided on the network by the platform operator at a pre-defined location. This location defines the URI the device uses to check for updates to the platform configuration and to download the updated configuration file if one exists.
- Manufacturer Configuration may be provided on the network by the manufacturer at a pre-defined location.
- ISP Configuration may be provided on the network by the viewer's ISP if the ISP is a participating ISP. The device uses a single pre-defined URI to check for updates to the ISP configuration and to download the updated configuration file if one exists. Redirection or other means allow the file to be served by the ISP themselves.
Figure 14 shows how the sources of configuration information are managed and the order in which they are applied to the Local Storage Repository:
Figure 14 : Sources of configuration information and the order in which they are applied to the Local Storage Repository
7.1.1.3. Viewer and Application Configuration
The viewer shall have the ability to change device configuration. These configuration parameters may be set by the Platform Main UI or any other application with the appropriate permissions. When a configuration parameter is retrieved from the Local Storage Repository (LSR) and that value has previously been set by any application other than the configuration update process, then the device shall return the configuration parameter value saved by the application. Otherwise, the value from the active configuration created by the configuration update and merge process shall be returned. Configuration parameters that are set by an application other than the configuration process shall not be overwritten by configuration updates, Core Device Software updates or Platform Software updates. All viewer settings shall be deleted when the device performs a factory reset. See Chapter VII Consumer Device Storage Management.
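The merge order and viewer-override behaviour described above can be summarised in a short sketch; the source names, the permission model and the dict-based representation of the LSR are illustrative assumptions only.

    # Later sources may overwrite earlier ones, subject to per-parameter
    # permissions defined by the platform operator.
    SOURCE_ORDER = ["manufacturer_default", "platform", "manufacturer", "isp"]

    def merge_configuration(sources: dict, permissions: dict) -> dict:
        active = {}
        for source in SOURCE_ORDER:
            for name, value in sources.get(source, {}).items():
                # A source may only set parameters it has permission for.
                if source in permissions.get(name, ()):
                    active[name] = value
        return active

    def get_parameter(name: str, active: dict, app_overrides: dict):
        # A value saved by an application (e.g. a viewer setting) takes
        # precedence over the merged active configuration and survives
        # subsequent configuration updates.
        return app_overrides.get(name, active.get(name))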
The merged configuration shall not contain previous values from a remote configuration source that has responded with a 204 No Content HTTP response.
After a successful download of a configuration from the platform operator, the device shall store the ETag in the LSR for use during future configuration updates. If an ETag header was provided with a previous response from a specific configuration source, the device shall add an If-None-Match HTTP header to the request quoting the ETag value for that configuration source. If an ETag is not available from a previous request, the device shall send an If-Modified-Since header quoting the date and time at which the last successful download of the relevant configuration was achieved. The device shall not send a request with both If-None-Match and If-Modified-Since headers.
Devices shall check for updated configuration and apply it in all power modes; i.e. unless the device is physically powered off, it shall update the device configuration. Devices shall implement the back-off mechanism for HTTP errors and HTTP redirections as described in Chapter IV Consumer Device IP Networking.
Platform configuration updates can change the base URL and URL template of the configuration update service. If a platform or ISP configuration update base URI fails five times, the device shall use the base URI of the last successful configuration update. If the base URI of the last successful configuration update fails, then the device shall use the base URI provided as part of the default platform configuration, i.e. the Platform Provisioning File.
If the download fails due to an authoritative response that the domain name (or a domain name referenced in an HTTP redirect) does not exist, the device shall treat that source as providing an empty configuration file and continue with the configuration update. If the download fails for any other reason (including but not limited to any non-authoritative DNS error, TCP/IP connection error, or HTTP 4xx or 5xx response), devices shall treat this as a transient failure and continue to use the most recent successfully-merged configuration file for that configuration source, if one is available.
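The mutually exclusive conditional request headers described above might be chosen as in this small sketch (the LSR field names are invented for illustration):

    def conditional_headers(source_state: dict) -> dict:
        # Prefer If-None-Match when an ETag is stored for this source;
        # otherwise fall back to If-Modified-Since; never send both.
        if source_state.get("etag"):
            return {"If-None-Match": source_state["etag"]}
        if source_state.get("last_success_http_date"):
            return {"If-Modified-Since": source_state["last_success_http_date"]}
        return {}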
Examples of the URL template are:
${baseurl}/${manufacturer}/${model}/${core_software}/${platform_software}/${config_name}
or
${baseurl}/${config_name}-${platform_software}.cms
The device shall be configured with a secondary base URI for platform configuration updates, and if configuration updates fail using the first base URI the device shall attempt the update using the secondary base URI. When the device checks for platform configuration it shall update the last checked time stored in the LSR. When the device successfully downloads and applies updated platform configuration to the device it shall save the current time in the LSR.
Examples of the URL template are:
${baseurl}/${manufacturer}/${model}/${core_software}/${platform_software}/${config_name}
or
${baseurl}/${config_name}-${core_software}-${platform_software}.cms
When the device checks for ISP configuration it shall save the last checked time in the LSR. When the device successfully downloads and applies updated ISP configuration to the device it shall save the current time in the LSR.
The device may check for an updated Manufacturer Configuration and download it using a URL constructed from the URL template contained in the LSR. The URL template consists of a number of URL template variables that shall be replaced with the appropriate values. The following table shows the variables that shall form part of the URL template.
Variable | Description
${baseurl} | The Base URL of the OEM configuration check and download.
${manufacturer} | The manufacturer name, URL encoded.
${model} | The URL-encoded device model number.
${core_software} | The version number of the Core Device Software currently installed and running on the device.
${compatibility_version} | This identifies the Compatibility Version supported by the Core Device Software in the format: [major_version].[minor_version].[revision]
${platform_software} | The version of the Platform Software currently installed and running on the device.
${config_name} | The name of the OEM Configuration location that should be checked for an update.
Table 20 : OEM Configuration URL Template Variables
7.3.1. Authentication
Configuration files downloaded to the device shall be signed. Configuration is signed by the organisation that created it, using their configuration signing key, and packaged with their corresponding certificate signed by the platform operator. The device shall successfully verify the signature, certificate chain and certificate policy before applying the configuration parameters. If verification of the signature fails or verification of the certificate chain fails, the configuration parameters shall not be applied to the device.
The certificate policy term for platform configuration, 1.2.826.0.1.7308805.1.2, shall be explicitly present in the signer's certificate when verifying platform configuration. The certificate policy term for ISP configuration, 1.2.826.0.1.7308805.1.3, shall be explicitly present in the signer's certificate when verifying ISP configuration. Intermediate certificates shall either have the same policy term or the RFC 5280 defined anyPolicy. Devices shall check for the presence of the relevant policy when verifying the signing certificate and, if it is not present, verification shall fail.
The signature method is CMS as defined in RFC 5652, with an unencrypted data section and using RSA. The configuration signing certificate shall be identified as the SignerIdentifier, as described in RFC 5652 section 5.3. The configuration signing CA certificate shall be provided in the CMS Certificates section.
During a configuration update any applications accessing the Local Storage Repository shall continue to retrieve the previously configured parameters.
Chapter VII
Consumer Device Storage Management
1. Chapter Summary
This chapter specifies requirements for storage systems integrated into the Consumer Device, and associated data management functions.
1.1. Audience
This chapter is for:
- Device Manufacturers
This chapter is not relevant for:
- Application Developers
- ISPs
- Content Providers
2. Storage Types
2.1. Writeable storage types
Devices shall support the following writeable storage areas:
- Writeable data only accessible by the Core Device Software and further restricted on a per-process group membership basis.
- Writeable data accessible by Platform Applications, but not accessible by Content Provider Applications.
- Writeable data accessible by Content Provider Applications.
- Temporary storage in memory.
- Writeable media data used to store recorded or downloaded content.
This is initiated by pressing a button or combination of buttons on the front panel of the unit. As no interaction with an on-screen UI is required, this mechanism can be used even if a display is not connected.
UI Initiated Factory Reset
This is initiated by selecting an option in the Settings Menu of the Platform Main UI. This mechanism provides the capability to optionally give the viewer the choice of removing or retaining certain items. The details of controlling access to such a feature or the on-screen warnings to prevent misuse are beyond the scope of this specification.
Chapter VIII
Broadcast Content Delivery
1. Chapter Summary
The DTG D-Book specifies the "Requirements for Interoperability" that have been adopted by multiplex operators in implementing digital terrestrial television services within the UK. This chapter complements the functionality described in DTG D-Book 7 Part A v1 in order to support effective integration of broadcast delivery in a connected television environment.
1.1. Audience
This chapter is relevant to:
Device Manufacturers, who need to:
- Implement the necessary support for DTG D-Book functionality, including support for MHEG red button services.
- Implement support for the MHEG Interaction Channel.
- Implement extensions to the MHEG presentation engine to enable the launch of IP-delivered applications from broadcast services.
Content Providers, who wish to understand what support is provided for broadcast content delivery.
Application Developers, who wish to launch IP-delivered applications from broadcast services.
2. D-Book Compatibility
YouView DTT HD DVR devices shall be implemented in line with DTG D-Book 7 Part A v1 and shall support all the functionality required by Section 22.4 (which in turn builds on 22.1, 22.2 and 22.3), including:
- Support for the Guidance descriptor; section 22.1.3.5.4
- Trailer Booking (Green button); section 22.2.3.12
- Recommendations; section 22.2.3.6
- SD/HD Event Linkage; section 22.3.3.2
- MHEG Interaction Channel; section 22.3.5.2 (ICEncryptedStreamExtension is not required for launch)
- Audio Description for HD; section 22.3.1.4
- Native Bitstream Audio over HDMI; section 22.3.4.4.1
- Service Information and Selection: receiving signals from multiple transmitters; multiplex handling; service handling; section 22.1.3
- Dynamic switching of audio between stereo and surround sound; section 22.3.1.1
- MHEG High Definition Graphics Model
- Broadcast Record Lists; section 22.4.4
3. MHEG-5 Extensions
Devices support DTG D-Book 7 Part A v1 applications that can access content and services over IP. The following extensions allow applications of any type to be launched over IP from a running MHEG application. Note: only application types supported by the device will be launched successfully.
3.1.1. ApplicationLaunch
Hands control of execution to another application of an arbitrary type.
3.1.1.1. Synopsis
ApL( location, [name, value], success )
3.1.1.2. Arguments
Argument | in/out/in-out | Type | Comment
location | input | GenericOctetString | Location of the application to run
name, value | input | name: GenericOctetString; value: GenericBoolean, GenericInteger or GenericOctetString | List of name/value pairs to be passed to the application
success | output | GenericBoolean (shall provide an IndirectReference to a BooleanVariable) | True if the application started successfully, false otherwise
3.1.1.3. Description
Causes a new application to be started with the specified arguments. The application to run is specified by the location parameter. A side-effect of the resident program may be that the MHEG engine is stopped (killing the application). In all cases the state of the True Persistent Storage shall not be affected.
The resident program takes a variable number of arguments. Zero or more name/value pairs may be present. If any name/value pairs are present, they are used to construct a data set of content type application/x-www-form-urlencoded as specified by section 17.13.4 of HTML 4.01, except that references to IETF RFC 1738 shall be taken as references to IETF RFC 3986, which updates it. This produces a data set of the form name1=value1&name2=value2, where each of the names and values has been percent-encoded after replacing any space characters with +.
The data set may contain characters that are not represented in the US-ASCII character set; consequently the percent-encoding shall be carried out as specified for characters from the Universal Character Set in section 2.5 of IETF RFC 3986. Characters are assumed to be encoded as UTF-8. For example, the character Latin Capital Letter A With Grave is represented in UTF-8 by the octets 0xC3 0x80. In the text representation of MHEG, this character would be written as =C3=80; after percent-encoding this would become %C3%80.
The data set is appended to the location, with a ? character as a separator; this forms a URI which references both the application to launch and the parameters to be passed to it. GenericOctetString arguments are treated directly as strings. GenericInteger arguments are converted to strings as decimal integers with no leading zeros. GenericBoolean arguments are converted to the string true if true and to false if false.
In any case where an invalid set of arguments is supplied (such as a missing value argument), the resident program call shall fail in accordance with DTG D-Book 7 Part A v1 section 13.10.12. It shall not be possible to launch an MHEG application using this resident program. The location URI shall follow the rules for the use of reserved characters in HTTP URIs as defined in DTG D-Book 7 Part A v1 section 18.3.2.5.
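The data-set construction described above corresponds closely to standard form encoding; the following Python sketch (with invented argument values) illustrates the conversion rules for the three MHEG argument types.

    from urllib.parse import quote_plus

    def build_launch_url(location: str, pairs: list) -> str:
        def encode(value) -> str:
            if isinstance(value, bool):            # GenericBoolean
                return "true" if value else "false"
            if isinstance(value, int):             # GenericInteger: decimal, no leading zeros
                return str(value)
            # GenericOctetString: UTF-8 percent-encoded, spaces become '+'.
            return quote_plus(str(value))
        if not pairs:
            return location
        data_set = "&".join(f"{quote_plus(name)}={encode(value)}" for name, value in pairs)
        return f"{location}?{data_set}"

    # e.g. build_launch_url("http://apps.example.com/player", [("prog", "À la carte"), ("hd", True)])
    # -> "http://apps.example.com/player?prog=%C3%80+la+carte&hd=true"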
3.2.1. GetLaunchArguments
Retrieves an argument set by another application of an arbitrary type.
3.2.1.1. Synopsis
GLA( name, value )
3.2.1.2. Arguments
Argument | in/out/in-out | Type
name | input | GenericOctetString
value | output | GenericOctetString (shall provide an IndirectReference to an OctetStringVariable)
3.2.1.3. Description
Retrieves the value of the named argument. Arguments can be set by applications of an arbitrary type, other than MHEG, when launching an MHEG application.
The value of the argument is provided to the application as an OctetStringVariable. It is the responsibility of the application to convert this value into another type (using the SetVariable elementary action or one of the type conversion resident programs) if required. If the argument to be retrieved does not exist then the resident program succeeds and the value parameter is a zero length string.
ApplicationLaunchExtension(N): ApE(N)
Chapter IX
IP Content Delivery
1. Chapter Summary
This chapter covers the ways in which content is acquired and consumed from the IP connection of the set top box. The platform is designed to support four key use cases for A/V media delivery:
- streaming of non-live content (video-on-demand)
- streaming of live content
- download of programmes to local storage
- close integration of A/V content and user interaction
This chapter does not cover hardware requirements and network interfaces (which are covered in Chapter II Consumer Device Platform) or the network stack requirements (covered in Chapter IV Consumer Device IP Networking) or the provisioning of the device with configuration information (which is addressed by Chapter VI Consumer Device Software Management).
Content Providers, who need to:
- Understand the way in which YouView devices handle IP-delivered content
- Understand the content formats that are supported.
ISPs, who may wish to understand the protocols and mechanisms used by YouView devices to access media content.
2. Streaming Protocols
2.1. Introduction
This section describes the streaming protocols that are used to fulfil the A/V delivery use cases described in section 1. Specific mechanisms for accessing A/V playback functionality from particular presentation engines are out of scope of this chapter.
2.2.2. Buffering
Devices shall provide buffering to cater for variation in the data rate achieved across the network and to manage the data needed by the media decoders. Two buffers are defined:
- a decoder feed buffer, of a size necessary to ensure that the decoders are never stalled due to the normal variation in the bitrate of the encoded stream and due to any latency associated with reading data from the media buffer;
- a media buffer, into which data received from the network is stored.
The decoder feed buffer shall be held in RAM. Devices shall have at least 20 Mbytes of RAM to allocate to decoder feed buffers for streams that are being decoded (there may be more than one at any point in time). Streaming sessions that are paused do not require a decoder feed buffer until they are resumed. The required size of the decoder feed buffer for a particular stream is determined based on stream parameters.
The media buffer may be in any form of storage. Devices shall have at least 2 GBytes of storage in total to allocate to the media buffers for streaming sessions (there may be more than one at any point in time). An individual streaming session shall not consume more than 90% of the available media buffer space, so that it is always possible to have at least two concurrent sessions. Devices shall retain the previous 30 seconds of media in the media buffer to allow for a rapid seek backwards (e.g. to recap a piece of missed dialogue or to see a particular event in a sports match again).
Note: the minimum media buffer size specified for a single streaming session allows for around 3 hours of buffering ahead of the current play point at a typical SD stream bitrate. At lower stream bitrates, a greater amount of forward buffering is possible.
A defined buffering model is specified to ensure that:
- the network infrastructure is not adversely impacted by large numbers of clients requesting lots of data at the same time;
- minimal amounts of data are transferred but not used (for example, if users decide not to continue watching the content after a short period of time).
Two fill modes are defined:
- a fast fill mode, in which data is buffered as fast as it can be delivered by the network;
- a managed fill mode, in which data is buffered more slowly, at a rate not exceeding 1.2x the consumption rate.
Devices shall use the fast fill mode when the amount of data buffered ahead of the current play point is less than a threshold value which is defined as the size of the decoder feed buffer plus an amount corresponding to 10 seconds of media. When the amount of data buffered is greater than this threshold, the managed fill mode shall be used. This threshold is subsequently referred to as bufferThreshold. The required buffering model is summarised in the following figure:
When reading data into the buffer, devices shall do so with a single HTTP request. Devices shall not make multiple concurrent connections, nor make multiple short requests for sections of the content. This is so as to minimise the overheads on the server. Where there is a requirement to download content out of order, for example to obtain stream metadata from a point other than at the start, HTTP Range requests may be used. All rate management of the kind described above must be done by the client application reading not more than the required rate of data from the TCP socket. This allows the TCP stack to manage the rate limiting.
If the buffer empties entirely and causes an interruption to playback, an underflow event shall be generated but playback must continue when data next becomes available. The algorithm described in Section 2.2.4 shall be used to determine when to resume playback. A higher level software component may use this event to consider whether there is a lower bitrate stream that could be played or to suggest the viewer download the content to watch later.
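A small sketch of the fill-mode decision implied by bufferThreshold; the decoder feed buffer size is stream-dependent, so the constant used here is purely illustrative.

    DECODER_FEED_BUFFER_BYTES = 4 * 1024 * 1024    # illustrative per-stream value

    def select_fill_mode(buffered_ahead_bytes: int, media_bytes_per_second: float) -> str:
        # bufferThreshold = decoder feed buffer size + 10 seconds of media.
        buffer_threshold = DECODER_FEED_BUFFER_BYTES + 10 * media_bytes_per_second
        if buffered_ahead_bytes < buffer_threshold:
            return "fast"       # fill as fast as the network can deliver
        return "managed"        # read at no more than 1.2x the consumption rate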
The choice between these methods is made according to Section 2.2.10.1. MP4 files can carry the necessary indexing information internally. Seek functionality shall be made available by the device where this information is present at the start of the file.
2.2.10.1. Selecting a mechanism for time to byte mappings
The information needed to map from time to byte offset can be signalled through the use of the X-BytesPerSecond and X-IndexFileLocation headers in the HTTP response from the server, or provided by an application. If the X-IndexFileLocation header is present, the device shall retrieve the index file and use the Index File based timeline (see section 2.2.10.2). If there is an X-BytesPerSecond header but no X-IndexFileLocation header, then the device shall use the Known Bitrate timeline (see section 2.2.10.3). The Known Bitrate timeline shall also be used where an application has provided information about the true duration of the stream and the device is able to determine the content length in bytes. In all other cases, devices shall use the No Timeline method (see section 2.2.10.4).
The HTTP headers described here shall be observed whether they appear in a 302 response or in the response that delivers the content.
2.2.10.2. Index file timeline
Where the index file timeline approach is used, the X-IndexFileLocation header shall be set in the HTTP response. The value of this header shall be an http or https URL from which an index file as specified below may be obtained. The device shall retrieve the index file from the URL specified in the X-IndexFileLocation header.
To allow efficient parsing of the index file by the device, a binary format is used. The file consists of a series of samples. Each sample has a byte offset, as a 64-bit unsigned integer, and a time in milliseconds from the start of the file, as a 32-bit unsigned integer. The byte position should correspond to the start of a transport stream packet that represents a suitable random access point, but this is not guaranteed. The last sample in the file contains the size of the file in bytes and the total duration of the media in milliseconds. All integers are encoded most significant byte first.
File ::= NumOffsetSamples [OffsetSamples] LastSample
NumOffsetSamples ::= uint32
OffsetSamples ::= [OffsetSamples] Sample
Sample ::= BytePosition TimeValue
BytePosition ::= uint64
TimeValue ::= uint32
LastSample ::= FileSize MediaDuration
FileSize ::= BytePosition
MediaDuration ::= TimeValue
NumOffsetSamples indicates how many Samples there are. It does not include the LastSample, which must always be present. Devices should store this file in a convenient structure to allow them to:
- determine current playback time from the current position within the file
- find the nearest sample to a time for seeking within the file
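A sketch of parsing this index file format in Python with the standard struct module, using the big-endian encoding specified above; the function and variable names are illustrative.

    import struct

    def parse_index_file(data: bytes):
        # NumOffsetSamples: uint32, most significant byte first.
        (num_samples,) = struct.unpack_from(">I", data, 0)
        samples, offset = [], 4
        for _ in range(num_samples):
            # Each Sample: BytePosition (uint64) then TimeValue in ms (uint32).
            byte_pos, time_ms = struct.unpack_from(">QI", data, offset)
            samples.append((byte_pos, time_ms))
            offset += 12
        # LastSample: FileSize (uint64) and MediaDuration in ms (uint32).
        file_size, duration_ms = struct.unpack_from(">QI", data, offset)
        return samples, file_size, duration_ms

    def seek_byte_offset(samples, t_required_ms: int) -> int:
        # Use the closest sample (a random access point); do not interpolate.
        byte_pos, _ = min(samples, key=lambda s: abs(s[1] - t_required_ms))
        return byte_pos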
Devices shall support index files up to 256 Kbytes in size (containing up to 21 thousand samples).
To determine the current position (t_current, in milliseconds) during playback, a device shall use the PTS value of the most recently presented video frame (PTS_current) and a previous PTS value (PTS_ref) of a video frame for which the media time (media_time_ref) is known, as follows:
t_current = (PTS_current - PTS_ref) / 90 + media_time_ref
If the stream does not contain video, the PTS values of audio access units shall be used instead. Note: the previous known reference point may be the start of the file, in which case media_time_ref will be zero.
To determine the byte offset to seek to from a time value, a device shall determine the closest sample to the time value in the index and use the byte offset associated with that sample. For seeking, devices shall not attempt to interpolate between samples, as the samples are intended to indicate random access points within the file where decoding can start. Where the seek position is between random access points, devices shall decode the media from the previous random access point but present decoded media only from the requested seek position.
To determine the duration of the stream, the device shall use the MediaDuration entry in the LastSample of the index file.
2.2.10.3. Known bitrate timeline
Where the known bitrate timeline approach is used, the X-BytesPerSecond header is present in an HTTP response or the application has provided sufficient information for the stream bitrate to be calculated (see footnote 6). The value of the X-BytesPerSecond header is a positive integer, which indicates the average bitrate of the file in bytes per second.
To determine the current position during playback, devices shall use the same approach described in section 2.2.10.2.
To determine the byte offset to seek to from a time value, a device shall calculate the required offset as b_required = t_required x BytesPerSecond / 1000 and then round this to the nearest transport packet (188 bytes).
To determine the duration of the stream, the device shall obtain the size of the file in bytes from the Content-Length: or Range: header from the server and calculate the time corresponding to the end of the file as duration = bytes x 1000 / BytesPerSecond.
2.2.10.4. No timeline method
Where no external means of mapping between time and byte position is available, devices shall use PCR and PTS information from the stream as follows.
To determine the current playback position (t_current, in milliseconds), a device shall use the PTS value of the most recently presented video frame (PTS_current) and the PTS value of the first video frame in the stream (PTS_start), as follows:
t_current = (PTS_current - PTS_start) / 90
If the stream does not contain video, the PTS values of audio access units shall be used instead.
To determine the duration of the stream, the device shall obtain the first and last PCR values from the stream (PCR_start and PCR_end respectively) and calculate:
duration = (PCR_end - PCR_start) / 27000
To calculate the byte offset to seek to (b_required) from a time value (t_required), a device shall calculate the required offset as:
b_required = t_required x bytes / duration
These calculations may require that the device download a portion of the stream from the start and/or the end in order to acquire the necessary PCR or PTS values. When reading a PCR from the end of the stream, devices shall request data 25 Kbytes at a time, working backwards until a PCR is found. For streams with bitrates of less than 2 Mbps, one 25 Kbyte chunk should be sufficient to retrieve the final PCR. Devices shall download at most ten chunks; if no PCR is found in that amount of data, there is an error in the stream.
Footnote 6: In this case, the bitrate figure is derived from the content duration and the information from the Content-Length HTTP header.
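The timeline calculations above translate directly into code; a sketch under the stated clock rates (PTS at 90 kHz, PCR at 27 MHz), with illustrative function names.

    TS_PACKET = 188  # transport stream packet size in bytes

    def current_position_ms(pts_current: int, pts_ref: int, media_time_ref_ms: int) -> float:
        # t_current = (PTS_current - PTS_ref) / 90 + media_time_ref
        return (pts_current - pts_ref) / 90 + media_time_ref_ms

    def seek_offset_known_bitrate(t_required_ms: float, bytes_per_second: float) -> int:
        # b_required = t_required x BytesPerSecond / 1000, rounded to a packet boundary.
        b_required = t_required_ms * bytes_per_second / 1000
        return round(b_required / TS_PACKET) * TS_PACKET

    def duration_ms_no_timeline(pcr_start: int, pcr_end: int) -> float:
        # duration = (PCR_end - PCR_start) / 27000
        return (pcr_end - pcr_start) / 27000

    def seek_offset_no_timeline(t_required_ms: float, total_bytes: int, duration_ms: float) -> int:
        # b_required = t_required x bytes / duration
        return int(t_required_ms * total_bytes / duration_ms)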
2.2.12. Cookies
The media playback client is not required to store persistent cookies. Session cookies shall be retained for at least the period of time over which a particular media asset is being obtained. In any event, cookies shall be discarded when the device next enters standby. The store of cookies for the media playback client shall not be shared with any other use of cookies on the device.
2.2.13. Caching
Devices are not required to cache any media content between progressive download sessions. Any caching that is performed by a device shall conform to the requirements of HTTP/1.1 and shall be transparent. Devices shall not use any cached data in place of content obtained from the server unless it can be confirmed as being up to date. For the avoidance of doubt, an attempt to access progressive download content shall always fail if the server cannot be reached at the time playback is attempted.
The algorithms for selecting the bitrate of media to download will need to change as UK IP networks evolve and may also need to vary between content providers. The bitrate switching decisions will therefore be devolved to software that can be updated. The switching decisions will be influenced by the current buffer state as well as preferences indicated by the application.
YouView intends to extend this specification to include Fragmented MP4 based content once the output of industry standards organisations working in this area, such as MPEG and the DTG, is known. It is envisaged that the manifest and transport stream requirements currently specified will be compatible with any future version of this specification; however, additional features may be supported if these are specified in other standards. These additional features are therefore not required for launch.
The constraints listed in section 3.2 of the OIPF Release 2 Specification HTTP Adaptive Streaming shall apply. Additionally, the following limitations apply:
- The ProgramInformation element can be ignored.
- Any TrickMode element present can be ignored.
- Devices are not required to support more than one Period.
If the media segments are encrypted, the ContentProtection element will be present. For example, media protected by Marlin would include a ContentProtection element with schemeIdUri="urn:dvb:casystemid:19188".
3. Content Download
Functionality described in this section is not required for launch.
3.1. Introduction
Devices shall support background downloading of content. Devices shall maintain a queue of content to download. There may be low priority items in the queue, which may only be downloaded during a configured background download window, and high priority items, which may be downloaded at any time. The handling of these different classes of download, as well as other IP resource management constraints, is defined in Chapter IV Consumer Device IP Networking.
4. Content Formats
4.1. Codecs
IP-delivered A/V content may use any of the codecs specified in Chapter II Consumer Device Platform. It is expected that the majority of streamed content will use H.264 for video and HE-AAC for audio. Devices shall observe aspect ratio and AFD signalling when presenting streams delivered over IP.
4.2.2. Encryption
Transport streams may be encrypted using AES with a 128-bit key using the Cipher Block Chaining (CBC) encryption mode with the residual termination block process as specified in IEC 62455 section 6.4.6. Devices shall support decryption of an AES-encrypted transport stream at bitrates up to at least 10 Mbps. Further detail on encryption formats, including details of how encrypted media links to specific content protection mechanisms via IEC 62455 Key Stream Messages and PMT descriptors is defined in Chapter X Content Protection : AV Over IP. In the case of transport streams segmented for adaptive bitrate streaming, devices may assume that a stream reconstructed from a valid sequence of segments will meet the same requirements for decryption as a conventional non-segmented stream. Note: this may require the content to be created with suitable alignment of the traffic keys in the constituent adaptive bitrate representations.
MP4 files conforming to this specification shall have the major brand 'avc1', indicating MPEG-4 part 10 (H.264) video content. Devices are not required to support MP4 files containing other video codecs. MP4 files contain a number of 'boxes'. The following table lists boxes relevant to this chapter, either because they must be parsed by the device, or because they are mandatory in the base specification. This table forms part of the normative specification.
Box type | Name | Usage | Device requirements | Restrictions for content production
'ftyp' | File Type Box | Indicates file format. | Shall recognise the 'avc1' brand. | Shall mark MP4 files with 'avc1' as the major brand.
'moov' | Movie box | Contains header information for the media in the file. | | For constraints for progressive download, see section 4.3.1.
'mvhd' | Movie header box | | SHALL support v0. MAY support v1. | Only the version 0 box SHALL be used.
'trak' | Track box | | |
'tkhd' | Track header box | Identifies a track within the file. | SHALL support v0. MAY support v1. | Only the version 0 box SHALL be used.
'mdia' | Media box | | |
'mdhd' | Media header box | | |
'minf' | Media information box | | |
'hdlr' | Handler reference box | | |
'vmhd' | Video media header box | | |
'smhd' | Sound media header box | | |
'dinf' | Data information box | Specifies the location of the media samples. | MAY be ignored, as restrictions in this specification prevent external location of media. |
'url ' | Data entry URL box | Specifies the location of the media samples using a URL or indicates they are within the current file. | MAY be ignored, as restrictions in this specification prevent external location of media. | Flags on this box SHALL be set to 0x01 and the url string empty, indicating the track is within the current file.
'stbl' | Sample table box | | |
'stts' | Decoding time to sample box | | |
'ctts' | Composition time to sample box | | |
'stsd' | Sample description table | | |
'stsz' | Sample size box | | |
'stz2' | Sample size box (compact version) | | | Where possible this box should be used in preference to 'stsz' to save space. This is likely to be the case for audio tracks.
'stsc' | Sample to chunk box | | |
'stco' | Chunk offset box | Indicates the location of chunks of samples within the file using a byte offset from the beginning of the file. | |
'co64' | Chunk offset box (64-bit) | Indicates the location of chunks of samples within the file using a byte offset from the beginning of the file. | | This box SHALL be used when a non-fragmented file exceeds 4GB in size. It SHOULD NOT be used otherwise, due to the space overhead introduced.
'stss' | Sync sample box | | Where present, the device SHALL use the information from this box to assist in seek operations. However, if the box is empty or missing, seek SHOULD still be performed, but decoding after the seek may take a number of frames to resume. |
'avc1' | AVC sample entry box | | |
'avcC' | AVC configuration entry box | | |
'btrt' | MPEG4 bitrate box | | |
'avcp' | AVC parameter sample entry box | | |
'sbgp' | Sample to group box | | |
'sgpd' | Sample group description box | | |
'mvex'* | Movie extends box | Indicates that the file contains fragments. | |
'mehd'* | Movie extends header box | Gives the total duration of a fragmented file. | SHALL support v0. MAY support v1. | This box SHALL be present and indicate the total duration if the 'mvex' box is present in a file. Only the version 0 box SHALL be used.
'moof'* | Movie fragment box | Contains header information for a fragment. | | This box SHALL NOT exceed 1MB in size.
'traf'* | Track fragment box | Contains header information for the specified track within the fragment. | |
'tfhd'* | Track fragment header box | | |
'trun'* | Track fragment run box | Contains sample duration and sizes for the track. | |
'mfra'* | Movie fragment random access box | Contains a list of random access points in a fragmented file. | | This box SHALL be the last box in all fragmented files.
'tfra'* | Track fragment random access box | Contains the random access points for the specified track. | SHALL support v0 and v1 boxes. This is to allow support for files over 4GB, which need 64-bit offsets. | Where fragments contain more than one randomly accessible sample, this box SHALL refer to at least the first randomly accessible sample in each fragment. The media SHOULD use a v0 box where the file size is less than 4GB.
'mfro'* | Movie fragment random access offset box | Helps a device obtain the 'mfra' box when this is placed at the end of the file. | | This box SHALL be present and SHALL be the last box within the 'mfra' box.
Table 27 : MP4 boxes
Boxes marked with an asterisk (*) in the table above are only used in MP4 files containing fragments.
MP4 files shall be constructed with the 'moov' box after the 'ftyp' box, at the start of the file and before any sample data. Media data for the tracks used by the MP4 file shall be contained within the file itself. Devices are not required to support references to tracks in external files. Edit lists shall not be used to alter playback. If an 'edts' box is present it shall be empty.
Note that some boxes have two versions, 0 and 1, and in many of these cases v1 exists to allow 64-bit timestamps to be used. The Timescale value set in the 'mvhd' shall be chosen appropriately by the content author to prevent the need to use 64-bit timestamps, and as such, v1 boxes shall not be used where indicated in the table.
Any boxes not listed here may be ignored by the device. Content providers shall not insert boxes not listed here if a device ignoring those boxes would materially affect the presentation of the media to the viewer. Note that 'mdat', 'free' and 'skip' boxes are not listed above as they are technically not parsed or interpreted by the device.
4.3.1. Progressive download
When content is intended for consumption through progressive download, there are additional requirements on the formatting of the media. These are to avoid lengthy delays at startup and to enable the device to seek efficiently within the file.
Additionally, files for progressive download should use fragmentation where necessary to ensure that the following recommended size limits are not exceeded:
- The 'moov' box total size shall not exceed 4MB. Ideally it should not exceed 1MB. This is to avoid excessive delays at startup while the 'moov' box loads.
- 'moof' boxes shall not individually exceed 1MB. This is to avoid the device running out of media to play while downloading a 'moof'.
- Fragments (that is, 'moof' plus 'mdat') shall not exceed 4GB. This avoids the need for 64-bit offsets to be used.
Within the media data, samples shall be interleaved such that no more than one second of data is required in the device's buffer for the current samples for all streams in the file to be available, as illustrated for a file containing one audio and one video track in the following figure:
Figure: sample interleaving, showing the 'moov' box followed by alternating audio and video chunks, each covering at most 1 second.
Devices shall play files which do not meet this recommendation for limits on 'moov' and 'moof' sizes, but the viewer experience may be impaired through lengthy startup times and possible pauses.
4.3.4. Encryption
MP4 content may be encrypted using AES with a 128-bit key. Devices shall support decryption of downloaded or progressively-downloaded MP4 content encrypted according to the Continuous Media Profile (PDCF) defined in section 7 of the OMA DRM Content Format specification version 2.1. Other encrypted MP4 formats, including an encryption mechanism for MP4 content that is part of an adaptive bitrate stream may be added in future.
This format allows subtitles to be used with any media format and also allows the same subtitle file to be used for other platforms (for example, delivery to PC clients).
4.4.1. Profile
Devices shall support at least the following features of the TTML specification:
#backgroundColor #cellResolution #color #content #core #extent #fontFamily #fontSize #fontStyle #fontWeight #layout #length-cell #length-em #length-integer #length-percentage #length-pixel #length-positive #length-real #lineHeight #nested-div #nested-span #origin #padding #presentation #showBackground #structure #styling #styling-chained #styling-inheritance-content #styling-inheritance-region #styling-inline #styling-nested #styling-referential #textAlign #textDecoration-under #timeBase-media #time-clock #time-clock-with-frames #time-offset #time-offset-with-frames #timing #writingMode-horizontal-lr
Note: in TTML, a requirement for a principal feature implies support for any subset features. For example, support for #fontFamily implies #fontFamily-generic and #fontFamily-non-generic.
4.4.2. Format
TTML subtitles will be provided as one XML file per selectable programme. The device shall support UTF-8 encoding for XML files.
4.4.3. Acquisition
The device shall download the necessary file specified by the application using an HTTP request. TLS shall be supported.
4.4.4. Rendering
4.4.4.1. Resolution
The device shall render subtitles to a graphics plane with a resolution of at least 1280x720, irrespective of the media resolution.
4.4.4.2. Anti-aliasing
The device shall provide rendering support for at least 8 colour tones per principal colour.
4.4.4.3. Timing
Times within the subtitle file shall be interpreted relative to the start of the media asset being presented.
Subtitles shall be rendered within 0.04 seconds of the specified time.
4.4.4.4. Background colour fill
When rendering a span with a tts:backgroundColor, the filled area shall cover the full line height as specified by the tts:lineHeight attribute. Where text has tts:backgroundColor applied at the span level, the filled area shall extend the width of a space character to the left of the first rendered character on each line and to the right of the last rendered character on each line.
4.4.6. Layer
Subtitles shall be rendered either through the screen manager and DirectFB, or on a separate 8bpp graphics plane below the main graphics plane, as described in Chapter II Consumer Device Platform.
This defines a region that is 14 text lines (cells) high, and starts one cell into the screen. The displayAlign property ensures that all subtitle lines are vertically aligned to the bottom of the region by default.
4.4.8.3. Font
tts:fontFamily: Tiresias. Where the platform does not support the font specified, it shall default to Tiresias. sansSerif and default shall always be mapped to Tiresias. Other font support is implementation dependent.
tts:fontSize: The default font size shall be 0.9c. This corresponds to a height of 41px at a 1280x720 resolution.
4.4.8.4. Paragraph
tts:color: white.
tts:lineHeight: If no lineHeight is specified, it shall be set to 1c so that the background boxes of adjacent lines touch without the need for padding.
Chapter X
Content Protection : AV Over IP
1. Chapter Summary
This chapter defines the way in which rights-sensitive A/V content can be protected and managed, both in distribution and whilst it exists in the consumer device. The chapter defines a range of content protection mechanisms intended to give content providers choice about what they employ; i.e. a content provider need only use a mechanism that is sufficient given the nature of the content being delivered.
Content Providers, who need to:
- Understand the security and utility provided by each of the content protection mechanisms
- Configure back-end systems to interface to the content protection mechanisms
- Encrypt content according to the supported encryption formats
2. Introduction
This chapter provides detailed specifications for a range of content protection options. The options vary both in complexity for the content provider and in the level of protection provided. Content providers may use any of the supported options according to their needs. Section 3 describes a set of content protection options to be provided by the core media playback function of the device. In addition, specific presentation environments may provide additional mechanisms for content protection. The table below summarises the features and capabilities of the different content protection options.
[Table: feature matrix comparing offline playback of downloads, configurable output controls, authentication and hardware decryption across the protection mechanisms:
3.1 No protection (Simple HTTP)
3.2 Simple device authentication
3.3 Device authentication using MS3
3.4 Device authentication and transport encryption (TLS streaming)
3.5 Device authentication and encrypted content delivery (Marlin MS3)
3.6 DRM protected content (Marlin Broadband)]
A number of sequence diagrams are shown in this chapter. These provide a simplified view of the interactions that take place with each mechanism. To illustrate the interactions between applications and the underlying media playback functions of the device, a component labelled MediaRouter is shown, which represents the application's interface onto the media playback implementation.
Streaming
Figure 18
Note: Interactions between the MediaRouter and Media decoder objects shown are illustrative only. This content delivery mechanism does not provide means to change the state of output control mechanisms from the default specified in section 3.9.
Note: One-time URLs cannot be used with this mechanism because the URL may be used more than once if the player needs to seek within the stream.
Figure 19
Figure 20
Note: Interactions between the MediaRouter and Media decoder objects shown are illustrative only.
2. The server responds with a Stream Access Statement (SAS), which can include an authenticator element to be substituted into the media URL and which also includes a set of output control requirements to be met whilst the content is being played.
3. The implementation forms a URL for the content (the C-URL) using the C-URIT and the authenticator.
4. The content is streamed using the C-URL as if it had been provided directly by the application.
5. For the period that the content is being presented, the output control mechanisms shall be configured to satisfy the output control requirements provided by the SAS within the context of section 3.9.
The SAS and the buffered data shall be retained until the application ends the streaming session or selects a different source. If the device cannot retrieve or process the SAS, an event shall be raised.
Where the SAS contains an authenticator, it is possible that this authenticator may expire after a period of time. If, after the initial request for the content, the device needs to make a new request (for example to perform a seek, to resume after pausing or to re-try after a transient error condition), the implementation shall first make one attempt to use the existing C-URL for the new request. If this fails, the implementation shall return to the MS3 service for a new SAS.
Figure 21
Note: Interactions between the MediaRouter and Media decoder objects shown are illustrative only.
No robustness rules apply to the handling of session keys or decrypted content with this scheme. As such, this mechanism aims to provide secure delivery of content into the device but provides limited protection against a determined attacker able to tamper with the device.
Figure 22
Note: Interactions between the MediaRouter and Media decoder objects shown are illustrative only.
1. The implementation first makes a secure connection to the MS3 service with client authentication. Server authentication shall be performed using the platform-specified bundle of HTTPS root certificates.
2. The server responds with a Stream Access Statement (SAS), which can include an authenticator element to be substituted into the media URL and which also includes a set of output controls to be applied whilst the content is being played.
3. The implementation forms a URL for the content (the C-URL) using the C-URIT and the authenticator.
4. The content is streamed using the C-URL as if it had been provided directly by the application.
5. Before the content is decoded, the media decryption function is configured to decrypt the content using the keys obtained from the SAS.
6. For the period that the content is being presented, the output control mechanisms shall be configured to satisfy the output control requirements provided by the SAS within the context of section 3.9.
Note: Steps 3 and 4 can proceed in parallel with steps 1 and 2 if the C-URIT does not require an authenticator. This will result in improved start-up times.
The SAS and the buffered data shall be retained until the application ends the streaming session or selects a different source. If the device cannot retrieve or process the SAS, or if at any point during presentation of content a content identifier is encountered for which a content key is not available, an event shall be raised.
Where the SAS contains an authenticator, it is possible that this authenticator may expire after a period of time. If, after the initial request for the content, the device needs to make a new request (for example to perform a seek, to resume after pausing or to re-try after a transient error condition), the implementation shall first make one attempt to use the existing C-URL for the new request. If this fails, the implementation shall return to the MS3 service for a new SAS.
The C-URL may reference any supported streaming protocol specified in Chapter IX IP Content Delivery, including multicast delivery. Devices shall provide hardware-accelerated decryption for content delivered in this manner. Bitrates up to the maximum required for unencrypted content shall be supported.
The DRM system can also be used for streaming use cases if desired. Two modes of licence acquisition are supported:
1. Licence acquisition prior to playback, initiated as a result of interactions between an application running on the device and an external web store. The location of the web store may be known to the application (in the case of a VOD store app) or may be obtained from information held in the metadata system (in the case of IP linear channels).
2. Licence acquisition on demand, triggered by an attempt to play protected content for which the device does not already have a valid licence.
For the period that the DRM-protected content is being presented, the output control mechanisms shall be configured to satisfy the output control requirements specified in the DRM licence within the context of section 3.9. Implementations shall comply with the Marlin Broadband Delivery System Specification v1.2. Implementations shall provide a Full Implementation as defined by the Marlin Broadband Network Service (BNS) specification v1.2 with support for BNS Extended Topologies. Devices shall indicate their support for BNS Profile and BNS Extended Topologies in the manner described in section 6 of the BNS specification.
There are three components:
Player App. This is the user-facing application through which the user selects content to watch or download and through which payments are made. This will generally be a service-specific application but could be the UI in the case of IP linear channels.
DRM. This represents the system component that encapsulates the licence acquisition functions of the DRM system, providing a high-level interface to the application. It stores and manages licences.
MediaRouter. This represents the interface onto the components that play back media content.
First, the application performs a transaction with the web store and obtains an action token. This is passed through an interface labelled Drm, where the DRM client acquires a licence from the licence server. After this process is complete, the application may optionally make an advance check that the DRM system now has a licence that will permit playback. When playback is requested, the implementation underlying the MediaRouter and Drm interfaces shown obtains the information required and begins decoding the media.
Figure 25
Note: Interactions between the MediaRouter and Media decoder objects, and between the Drm object and the MediaRouter are illustrative only. These interactions are implementation dependent.
Devices shall check all stored licences at least once every 24 hours against the following conditions. If neither condition 1 nor condition 2 applies to a licence, it shall be retained, otherwise it shall be deleted.
1. The licence has a urn:marlin:core:node:attribute:expiration-date attribute that represents a date in the past, or
2. The licence satisfies all of the following conditions:
   a. it is not required by any content stored on the device, and
   b. it has a urn:marlin:core:node:attribute:expiration-date attribute that is more than 30 days in the future, or has no such attribute, and
   c. it was acquired more than 30 days ago.
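A minimal C sketch of this retention rule follows, assuming a hypothetical Licence record; has_expiration and expiration_date stand for the urn:marlin:core:node:attribute:expiration-date attribute.

#include <stdbool.h>
#include <time.h>

/* Hypothetical licence record used only for illustration. */
typedef struct {
    bool   has_expiration;              /* expiration-date attribute present */
    time_t expiration_date;
    bool   required_by_stored_content;  /* condition 2a */
    time_t acquired_at;
} Licence;

static bool licence_should_be_deleted(const Licence *lic, time_t now)
{
    const time_t thirty_days = (time_t)30 * 24 * 60 * 60;

    /* Condition 1: the expiration date is in the past. */
    if (lic->has_expiration && lic->expiration_date < now)
        return true;

    /* Condition 2: not required by stored content (a), expires more than
       30 days in the future or never (b), and acquired more than 30 days
       ago (c). */
    if (!lic->required_by_stored_content &&
        (!lic->has_expiration || lic->expiration_date > now + thirty_days) &&
        lic->acquired_at < now - thirty_days)
        return true;

    return false;   /* neither condition applies: retain */
}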
Note: This ensures that where a licence relates to content that is stored on the device, the licence will not be discarded until the content is deleted or the licence expires. It is implementation dependent whether licences associated with content stored on the device are stored with the media or in a separate database. Implementations must take account of the following scenarios:
Downloads may reference multiple content IDs (e.g. for different tracks) but would normally have a single licence covering them.
A recording may reference multiple content IDs and multiple licences (e.g. for different temporal parts of the recording).
Multiple recordings may require the same licence or licences (e.g. if they were made from the same IP channel on the same day).
Note: This means that to store licences with the content, licences may have to be copied as they may relate to multiple assets stored on the device. Where content formats allow DRM licences to be embedded and delivered to the device within the media, devices shall search any such licences first, prior to searching elsewhere to find a valid licence for a requested operation. Devices shall check stored nodes and links at least once every 24 hours and discard any that have a urn:marlin:core:node:attribute:expiration-date attribute that represents a date in the past.
3.7. Personalisation
Marlin clients must be personalised in order to interact with Marlin licence servers and to handle Marlin licences. Devices shall implement a mechanism that ensures that the device is personalised before any third-party application runs for the first time. Any requirement to access a remote server to perform this task shall be included as part of the device's software upgrade procedure.
[Figure: Content protection architecture. Applications sit above an adaptation and control layer; the media path comprises output controls, demux & decrypt, media streaming, a secure key box, TLS authentication (section 3.4), buffering, MP4/TS decode and display, supported by the Marlin client, the MS3 service and the display hardware.]
The set of output controls in force at any point in time shall be sufficient to meet the requirements of all A/V presentation sessions that are active. The set of output controls shall be evaluated each time presentation of content begins or ends. As specified in DTG D-Book 7 Part A v1, HDCP shall be applied at all times unless the user has specifically configured the device to apply HDCP only where explicitly signalled. Note: This is due to the long periods of black picture that result when the HDCP state is changed. For all other output control mechanisms, the mechanism shall not be applied unless it is explicitly required for content that is currently being presented. Where a content licence imposes constraints on analogue outputs that the device does not support, the analogue outputs shall be disabled. When disabling the analogue video outputs, devices shall act as follows: If a digital video output is active, the analogue output shall be disabled with no other feedback provided.
If no digital display device is active, the video shall not be presented on the video plane and an on-screen dialogue shall be presented to the viewer.
Note: Chapter II Consumer Device Platform requires that the device has no analogue HD outputs.
The following tables indicate how these parameters map to the outputs required on YouView devices. This section is informative. In the event that the mappings defined here conflict with the requirements of the Marlin Compliance Rules, those rules take precedence. The following table applies to video or audio-video content:
Output: HDMI
Output control mapping: HDCP ON if (CCI != 0 || EPN == 0). HDCP may be disabled in other cases if the user has specifically configured the device to disable HDCP where not required.

Output: Analogue SD video
Output control mapping: Output disabled if (APS == 0 && DOT == 0). CGMS-A bits defined in EN 300 294 to be set according to the CCI value as follows:
CCI                               bit 12   bit 13
Copy control not asserted (00)    0        0
No more copy (01)                 0        1
Copy one generation (10)          1        0
Never copy (11)                   1        1

Output: SPDIF
Output control mapping: In PCM mode, output restricted to a maximum of 2-channel, 16-bit, 48 kHz for all Marlin-protected content. In bitstream mode, no restriction. SCMS set according to the CCI value as follows:
CCI                               SCMS
Copy control not asserted (00)    Copy allowed
No more copy (01)                 Copy prohibited
Copy one generation (10)          Copy once
Never copy (11)                   Copy prohibited

Output: Analogue audio
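As a further informative illustration of the CCI mappings above, the following C sketch derives the CGMS-A bits and the SPDIF SCMS state from a two-bit CCI value; the enum and function names are hypothetical.

#include <stdbool.h>
#include <stdint.h>

typedef enum { SCMS_COPY_ALLOWED, SCMS_COPY_ONCE, SCMS_COPY_PROHIBITED } Scms;

/* Map a two-bit Marlin CCI value to CGMS-A bits 12/13 (EN 300 294) and
   the SPDIF SCMS state, following the tables above. */
static void map_cci(uint8_t cci, bool *bit12, bool *bit13, Scms *scms)
{
    switch (cci & 0x3) {
    case 0x0:  /* copy control not asserted */
        *bit12 = false; *bit13 = false; *scms = SCMS_COPY_ALLOWED;    break;
    case 0x1:  /* no more copy */
        *bit12 = false; *bit13 = true;  *scms = SCMS_COPY_PROHIBITED; break;
    case 0x2:  /* copy one generation */
        *bit12 = true;  *bit13 = false; *scms = SCMS_COPY_ONCE;       break;
    default:   /* never copy */
        *bit12 = true;  *bit13 = true;  *scms = SCMS_COPY_PROHIBITED; break;
    }
}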
3.10.2. MP4
YouView shareholder companies are involved in industry standardisation activities including DTG, OIPF, 3GPP, DECE and MPEG, and take the work of these groups into account when deciding on content formats. These standardisation activities still have some way to go in agreeing a common approach to MP4 encryption, particularly where support for both progressive download and adaptive bitrate streaming is required. This specification therefore defines an encrypted MP4 format for progressive download only. YouView will address the requirement for an encrypted, fragmented MP4 file format at a later date. Devices shall support MP4 content encrypted using the OMA PDCF format.
Encrypted streams intended for use with MS3 (see section 3.5) or Marlin Broadband (see section 3.6) shall comply with the requirements of the Marlin Specification version 1.0.3. Devices are not required to support Silent Rights or Preview Rights URL signalling. Devices shall support embedded licences.
Chapter XI
Content Acquisition And Management
1. Chapter Summary
This chapter specifies the support for content acquisition and management to be built into a YouView device, and introduces the logical components Linear Acquisition, IP Downloader, Acquisition Manager and Local Media Library.
Section 2 specifies the consumer device content acquisition and management architecture. This has been designed to allow the content acquisition functions in the consumer device to be as autonomous as possible, given their mission-critical nature. This also enables maximum re-use of existing, tried-and-tested code for both programme acquisition from linear channels (as found in existing DVR products) and media asset download over IP.
Section 3 specifies the support for programme acquisition from linear channels (e.g. DVR-enabling functionality) to be provided by the Linear Acquisition component. This includes support for series bookings and the use of alternative instances when a recording attempt fails.
Section 7 specifies the support for background acquisition of programmes from linear channels, which can be used to reduce (or mitigate) the level of IP traffic by pre-acquiring programmes that are likely to be viewed, either by the audience as a whole or, in a more advanced exploitation, by the specific household in which the device is installed.
Section 4 specifies the support for download over IP to be provided by the IP Downloader component. In a non-IP-connected DVR device, the programme acquisition from broadcast function can operate on the assumption that it controls the only way to acquire content; hence all of the decision-making logic (or business rules) can be restricted to this function alone and internalised. The same is true of a media asset download over IP function in an IP-only device. However, when the two acquisition functions are both present, the business rules need to span both functions. For example, if the DVR fails to acquire a particular broadcast instance, it may be better to initiate an IP download than to wait to acquire a repeat transmission several days later. The Acquisition Manager component encapsulates business rules that cannot be internalised to the individual components providing acquisition capability. This is specified in section 5.
Section 6 specifies the support for managing and accessing acquired content to be provided by the Local Media Library.
2. Introduction
2.1. The conceptual model
Figure 27 shows the system components involved in the discovery, acquisition and consumption of programme content on the device. This chapter describes the functionality of the components in the acquisition subsystem and the relationships between them.
[Figure 27: The acquisition subsystem. Applications interact with the Acquisition Manager (Push-VOD record list retrieval, business rules, download booking interface), which coordinates the Linear Acquisition component (fed by broadcast metadata) and the IP Downloader (fed by IP metadata) on the acquisition side; the Media Router handles consumption.]
Note 1: To simplify Figure 27 some links have been omitted. Note 2: In addition to the Platform Main UI, other device services may also interact with components of the acquisition subsystem.
2.2.2. IP Downloader
This component handles the download of content over IP.
The Linear Acquisition component shall be able to take control of tuners as necessary to meet its acquisition obligations. Other software components that need access to tuners shall work around the behaviour of the Linear Acquisition component and under some circumstances are required to make use of tuner resources controlled by the Linear Acquisition component.
Scheduled Recording
Figure 28 shows how the different types of Booking will be represented in the logical model defined at the start of section 3.2.
Event Bookings will have a single Scheduled Recording, which in turn is associated with a schedule event.
Programme Bookings will have up to one Scheduled Recording associated with them at any given time. The Scheduled Recording will be created when an event with the appropriate programme CRID has been located in the schedule.
Series Bookings will have zero or more Scheduled Recordings associated with them. The number of Scheduled Recordings will depend on the number of events with the Series CRID (from the Booking) in the current linear schedule.
Timer Bookings will have a single Scheduled Recording, which in turn is associated with a specific time window on a particular linear service.
[Figure 28: Logical model of Bookings. An Event Booking has exactly one Scheduled Recording; a Programme Booking has 0..1; a Series Booking has 0..*; a Timer Booking has exactly one. Each Scheduled Recording belonging to an Event, Programme or Series Booking is associated with a Schedule Event via an event locator; a Timer Booking's Scheduled Recording is associated with a Timer via a timer reference.]
Bookings may be made by the following functions:
The Platform Main UI application, i.e. user Bookings
A Record List function
The virtual Scheduled Recording shall have priority equal to the highest priority Scheduled Recording with that particular event locator.
The types of Booking that each of these functions can make is restricted as shown in Table 32.
Table 32: Booking types mapped to Booking sources. The Booking types are Programme Booking, Event Booking, Series Booking and Timer Booking; the Booking sources are the Viewer, the D-Book Broadcast Record List and Platform-managed Push-VOD.
The priority of the Booking also affects how any associated Scheduled Recordings are managed alongside live or time-shifted consumption of linear TV/radio (see section 3.5.1).
viewer, e.g. presenting a suitable on-screen dialogue to allow them to resolve the conflict. The exact nature of this dialogue and how it is achieved is outside the scope of this specification. The prediction shall be based on the linear schedule available at the time of booking.
Note: A Booking Conflict will only occur when making a Priority 1 Event Booking.
3.3.2.2. Priority 2-11 Event Bookings
The Linear Acquisition component shall always accept lower priority (2-11) Bookings regardless of any potential conflicts, instead resolving any conflict at acquisition time.
Booking. This ensures that repeat showings of a programme are not acquired again even when a device has already completely acquired the programme and the viewer has deleted the file from the Local Media Library. The list shall be deleted when the Series Booking is deleted. The Linear Acquisition component shall search for any Events that have a Series CRID that matches the Booking. When an Event is found, the Event's Programme CRID (if present) shall be checked against the list of Programme CRIDs already acquired for this Booking:
If the CRID is not present in the list, the automatic update shall find the soonest Event with that Programme CRID that can be scheduled for acquisition without causing a predicted conflict.
If the CRID is in the list, no Scheduled Recordings shall be created.
Tuner control shall be granted in the following order of precedence:
1. Priority 1 recordings
2. Live or time-shifted linear consumption (TV/Radio)
3. Priority 2-11 recordings
This means that the Linear Acquisition component is able to take control of tuners as necessary to meets its acquisition obligations with respect to Priority 1 Scheduled Recordings, even if this is at the expense of live TV viewing. Priority 2-11 Scheduled Recordings shall only be acquired where they do not disrupt the control of tuners for Priority 1 Scheduled Recordings or live/time-shifted linear consumption. Acquisition of low priority Scheduled Recordings shall take place if possible even if some higher priority Scheduled Recordings are not possible.
[Figure: Decision logic when a new recording is due to start. If there is an in-progress recording with a lower priority than the one due to start, stop the recording with the lowest priority and start the new recording; otherwise, do not start the new recording and re-evaluate when a current in-progress recording finishes.]
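The flowchart's decision can be expressed as the following non-normative C sketch, in which a numerically larger priority level means a lower priority (Priority 1 is the highest); all types and helpers are hypothetical.

#include <stdbool.h>

/* Hypothetical recording handle: priority level 1 is the highest. */
typedef struct Recording Recording;

extern Recording *lowest_priority_in_progress(void);   /* hypothetical */
extern int  priority_level(const Recording *r);        /* hypothetical */
extern void stop_recording(Recording *r);              /* hypothetical */
extern void start_recording(Recording *r);             /* hypothetical */

static bool try_start(Recording *due)
{
    Recording *lowest = lowest_priority_in_progress();

    /* Is there an in-progress recording with a lower priority than the
       one due to start? */
    if (lowest && priority_level(lowest) > priority_level(due)) {
        stop_recording(lowest);   /* stop the lowest-priority recording */
        start_recording(due);     /* and start the new one */
        return true;
    }
    /* No: do not start the new recording; re-evaluate when a current
       in-progress recording finishes. */
    return false;
}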
3.6. Acquisition
3.6.1. Commencing acquisition
Acquisition of linear content shall be controlled by changes in the EIT present/following information as described in the DTG D-Book 7 Part A v1. Support for timer-based acquisition is not required for launch. For timer-based acquisition, it is expected that the acquisition start time will be the scheduled start time of the event and the acquisition end time will be the scheduled end time of the event, e.g. to support acquisition from IP channels.
audio, in accordance with viewer preferences at the time of acquisition
subtitles (if present); see section 3.6.2.1 for additional requirements
audio description (if present)
3.6.2.1. Subtitle timing signature
Whilst making any recording from a linear channel, if the service contains a subtitle component, the device shall parse the subtitle data to extract the timings of significant subtitle events as follows:
For each subtitle PES packet, the device shall determine whether the packet contains the beginning of a new subtitle, an update of a subtitle on screen, or the end of a subtitle, by identifying how many regions are listed in the page composition segment, which of these contain subtitle objects, and whether there is a version change for the page or for any region.
If there was no subtitle present previously, the presence of one or more regions containing one or more objects implies a subtitle start.
If there was a subtitle present previously, a page with no regions or a page with regions that are all empty of objects implies the end of that subtitle.
If a previous subtitle has not ended, there is a subtitle change if the page or any region that is part of the page changes version.
The device does not need to parse any further structures within the subtitle PES packet and does not need to decode any subtitle bitmaps. For each start, change and end point detected, the device shall record the PTS value of the corresponding subtitle PES packet and store this information in the Local Media Library. The PTS value of the first video frame shall also be recorded. The device shall provide a mechanism by which the timing data can be accessed by applications. The information shall be made available in the following format:
{
  startTime: 123456789,        (PTS of first video frame)
  subtitles: [
    { type: start,  time: 123456789 },
    { type: end,    time: 123456789 },
    { type: start,  time: 123456789 },
    { type: update, time: 123456789 },
    { type: end,    time: 123456789 }
  ]
}
Where 123456789 is replaced by the relevant PTS value in units of 1/90,000 sec.
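A minimal sketch of the classification rules in 3.6.2.1 follows, assuming a hypothetical DVB subtitling parser has already extracted the per-packet page state (how many regions carry objects, and whether the page or any of its regions changed version):

#include <stdbool.h>

typedef enum { SUB_NONE, SUB_START, SUB_UPDATE, SUB_END } SubtitleEvent;

/* Classify one subtitle PES packet from its page composition state. */
static SubtitleEvent classify_packet(bool subtitle_currently_on_screen,
                                     int  regions_containing_objects,
                                     bool page_or_region_version_changed)
{
    if (!subtitle_currently_on_screen)
        /* One or more regions containing objects implies a subtitle start. */
        return regions_containing_objects > 0 ? SUB_START : SUB_NONE;

    if (regions_containing_objects == 0)
        /* A page with no regions, or only empty regions, ends the subtitle. */
        return SUB_END;

    /* Subtitle still present: a page or region version change is an update. */
    return page_or_region_version_changed ? SUB_UPDATE : SUB_NONE;
}

/* For each SUB_START/SUB_UPDATE/SUB_END, the 90 kHz PTS of the packet would
   then be recorded in the Local Media Library in the format shown above. */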
Note: The actual duration is defined as the time between the event entering the present slot of EIT p/f and subsequently leaving it. If the viewer initiates a recording after an event has entered p/f present, the actual duration shall be calculated from the point at which recording began. For timer records, the actual duration shall be taken as the time between the scheduled start time of the recording and the scheduled end time.
When an acquisition is complete, any associated Scheduled Recordings in the logical representation defined in section 3.2 shall be deleted. If a Scheduled Recording is part of a Programme Booking the parent Booking shall also be deleted. If a Scheduled Recording is part of a Series or Collection Booking, the Booking shall not be deleted and the Programme CRID (if any) shall be added to the list of CRIDs already acquired for that Booking.
When an acquisition fails, any associated Scheduled Recordings in the logical representation defined in section 3.2 shall be deleted. The Local Media Library shall delete any acquired content associated with the acquisition but retain the stored metadata (see section 6.4.3). If no content has been acquired, the Local Media Library shall update its estimate of storage requirements (see section 6.2) and retain the stored metadata.
3.7.2.1. Priority level 1 Programme or Event Bookings
If a Scheduled Recording linked to a Programme or Event Booking fails, the Linear Acquisition component shall search the current EIT schedule to find an alternative instance that can be acquired. The search shall be based on the Programme CRID associated with the Schedule Event in the EIT p/f information. If an alternative instance is found, a new Scheduled Recording shall be created (see section 3.4.4) and linked to the original Booking. The Scheduled Recording associated with the failed acquisition shall be deleted. If no alternative instance can be found, the Scheduled Recording associated with the failed acquisition shall be deleted, as well as the associated Booking. If an alternative instance is found and the subsequent acquisition attempt results in a partial or complete acquisition, the stored metadata for the old (failed) acquisition shall be discarded from the Local Media Library.
3.7.2.2. Priority level 2-11 Event or Programme Bookings
The Linear Acquisition component shall not look for alternative instances to acquire failed lower priority Bookings. In case of failure, both the Booking and the Scheduled Recording are to be deleted.
3.7.2.3. Series Bookings
If a Scheduled Recording linked to a Series Booking fails, the associated Scheduled Recording shall be deleted. If an alternative instance is available for the failed recording, a Scheduled Recording may be created as a result of the automatic update described in section 3.4.3.
3.7.2.4. Timer based Bookings
The Linear Acquisition component shall not look for alternative instances to acquire failed Timer Bookings. In case of failure, both the Booking and the Scheduled Recording are to be deleted.
4. IP Downloader
The IP Downloader shall support the download mechanisms defined in Chapter IX IP Content Delivery. It shall also support the resource management functionality defined in this chapter.
5.2. Architecture
The business logic implemented by the Acquisition Manager will need to evolve, modifying the acquisition behaviour of the platform over time. To achieve this, the Acquisition Manager will need to be configurable.
[Figure: Acquisition Manager architecture. The Platform Main UI issues download bookings to the Acquisition Manager, which contains Push-VOD IP-mitigation logic and logic to maximise acquisition opportunities (business rules). The Acquisition Manager issues recording bookings to the Linear Acquisition component (recorded programmes) and download requests to the IP Downloader (downloaded and anticipated programmes), and receives acquisition and file events in return.]
Note: Local storage allocated to Push-VOD (Priority 7-11) acquisitions shall not be included in the values above.
6.3.1. Recordings
This descriptive metadata shall be obtained from broadcast. The Local Media Library shall acquire the necessary metadata when the Linear Acquisition component creates a Scheduled Recording (see section 3.4.4).
6.3.2. IP Downloads
The Local Media Library may acquire the necessary metadata before acquisition begins but it must update the metadata at the time of acquisition.
When responding to such queries, the Local Media Library shall not include the contents of the Push-VOD reservation (see section 7.2) or entries describing pending acquisitions, but it shall include entries for failed acquisitions. The Local Media Library shall provide means by which an Application can check whether a specific programme is present in local storage by providing an appropriate identifier, e.g. a Programme CRID. The Local Media Library shall take into account all locally stored content (including any in the Push-VOD reservation) as part of this check.
Chapter XII
Application Discovery And Installation
1. Chapter Summary
This chapter describes how downloadable Applications (which will be referred to as Content Provider Applications) are discovered, installed and managed on the consumer device, and introduces the logical component Local Application Library. Other Applications, provided as part of the Platform Software, may be present on the device; these are not covered by this chapter.
2. Overview
The Platform Main UI allows the viewer to discover a range of Content Provider Applications that can be installed onto the device. Figure 31 shows three different ways in which Applications could be discovered by the viewer.
[Figure 31: Application discovery routes: browsing on-demand content in the Platform Main UI, the Application Gallery in the Platform Main UI, and red-button access from a TV channel, monitored by the Application Monitor, which launches the relevant Application (e.g. Launch(ITVplayer), Launch(AppZ), BBCRedButtonApp).]
[Figure: Application hosting architecture. The Content Provider hosts Applications (app provider back-end services, application hosting) and supplies an application catalogue over a B2B interface to the MAS; on the device, the UIME launches Applications discovered via the Platform Main UI.]
Viewer discovery of Applications within the Platform Main UI is based on metadata provided by the Metadata Aggregation System (MAS). Applications themselves are not hosted by the Platform Operator. Instead they are hosted by the Content Provider and may be downloaded over the Application Download Interface (ADI). This allows Content Providers to control which version of an Application is available at a given time. At the heart of the environment is the Local Application Library (LAL). The LAL offers four main functions:
Installing Applications, including download and verification (see section 4).
Allowing other software components to query the installed Applications as part of Application launch (see section 5).
Management and update of installed Applications (see section 6).
Uninstalling Applications (see section 7).
In order to provide these functions the LAL includes Download and Verification subcomponents. The LAL also stores descriptive metadata for installed Applications for use by the Platform Main UI. The User Interface Management Engine (UIME) is responsible for managing the life-cycle of launched Applications (see Chapter XIII Consumer Device UI Management).
[Figure: Application metadata model. An Application has 1..n Device-specific Application Builds, each of which has 1..n Versions.]
The top-level Application entity represents the abstract application concept and contains descriptive information such as title, description and any parental guidance. A Device-specific Application Build refers to a build of an Application for a specific presentation engine. An Application entity may have multiple Device-specific Application Builds. Finally, a Version of a Device-specific Application Build describes a particular version of a Device-specific Application Build. This level of the model corresponds to an Application that can be installed.
Note: Though there will be multiple versions of a Device-specific Application Build, there will only be one available for download to the device at any given time.
4.3. Storage
The device shall provide a means to store the following:
Installed Applications.
Applications' metadata.
Applications' private data.
This allocation shall be separate from space used for AV media storage and is defined in Chapter VII Consumer Device Storage Management.
If the Download subcomponent is unable to access the file by following the supplied URL, the LAL shall check that the URL is still correct by updating the Application metadata to the latest version available. If the URL has changed the LAL shall inform the Download subcomponent which shall then run through the update check process defined above, using the new URL. If the LAL discovers that a new version of an Application is available it will update the metadata stored with the Application to indicate that an update is available. This can be used by the Platform Main UI Application when presenting the list of installed Applications, to alert the viewer to the fact that the installed version of a particular Application is out of date.
7. Uninstalling Applications
Installed Applications can only be uninstalled by a request to the LAL. Requests to uninstall an installed Application will be made by the Platform Main UI in response to:
An explicit request by the viewer to remove a Viewer-managed Application.
A decision by the Platform to remove a Platform-managed Application.
7.1. Storage
When the Platform Main UI requests that an Application be uninstalled, the LAL is responsible for ensuring that the following are removed:
The installed Application.
The descriptive metadata.
Any data the Application has stored on the device.
Chapter XIII
Consumer Device UI Management
1. Chapter Summary
This chapter defines how devices support more than one presentation technology and hence different types of Applications. It also describes how devices support the presentation to the user of multiple co-existing Applications, of the same or different content types, and how the shared device resources required by those Applications are managed. This chapter also introduces a logical component the User Interface Management Engine (UIME), and describes its features and behaviour.
1.1. Audience
This chapter is for:
Device Manufacturers, who need to:
Understand the concepts presented in relation to User Interface Management and implement the required functionality.
Integrate the open-source libraries and components specified.
This chapter is not relevant for:
Application Developers
ISPs
Content Providers
2. Introduction
The UIME is a configurable software component which manages the User Interface and exposes UI Management functionality to Applications and other software components. The key functions of the UIME are:
Management of the lifecycle of Application Player instances.
Management of the lifecycle of Applications rendered by Application Player instances.
Management of presentation properties of the multiple graphics surfaces which are created by multiple Application Player instances and drawn to by Applications.
Allocation and policing of Application Player memory and CPU usage limits.
Management of appropriate routing of User Input Events where multiple Applications are running concurrently.
Management of the Viewer Perceived Operating State of the Consumer Device.
Provision of sufficient configurability of the logic which implements the above, to support evolving in-field usage patterns.
This chapter uses the following terms: Platform UI Applications, Externally Started Application Player, External Application, User Input Event, Key Set, Grabbed Keys, Cold Starting, Warm Starting, System Memory, Video Memory and Collector Window, together with:
Input Window: The single Window associated with an Application which is capable of receiving User Input Events.
Active Application: The Application which is visibly running in the foreground and whose Input Window is focussed.
Application Manager: UIME sub-component responsible for Application state and lifecycle.
Application Player Manager: UIME sub-component responsible for the processes that host Applications, and for CPU and memory resource management.
Window Manager: UIME sub-component responsible for composition and rendering of windows for multiple Application Players.
3. UIME Overview
The UIME shall provide support for multiple Application Player instances running concurrently. These Application Player instances may be started by or independently of the UIME component. Application Player instances create Windows, and render Applications to those Windows. The UIME shall receive notifications when an Application Player process starts or stops, when an Application Player creates or destroys a Window and when an Application Player starts or stops playing an Application. The UIME shall use the Application Player, Application and Window state notifications to track which Application Players, Applications and Windows are in existence, and what the associations are between them. Application Players shall be linked against DirectFB and necessary shared system libraries, in order to provide direct access to functionality provided by those libraries.
3.1.4. UI Manager
The UI Manager sub-component provides overall User Interface control functionality, which is not specific to a particular Application instance.
3.2. Configuration
The behaviour of the Application Manager, Application Player Manager and Window Manager can be configured to support a range of user interaction models. The configuration supports optimisation of the UI Management rules for deployment to a range of devices whose graphics processing capabilities differ. The configuration also enables tuning of the logic governing graphics composition and Application lifecycle, to enable evolution in the field as understanding of audience behaviour increases and commercial models change.
Application Players are required to render graphics into DirectFB surfaces using DirectFB APIs. DirectFB stores display property and process information about its client processes, and hence about Application Players. The Application Player Manager uses the SaWMan library to access this information; this is the primary mechanism by which it stays synchronised with those external Application Player processes. The Application Player Manager shall support the following:
Lifecycle management of Application Players it starts (UIME Started Application Players).
Limited lifecycle management of Application Players which are started externally to it (Externally Started Application Players).
4.2.2. Starting
For UIME Started Application Players, the following two launch modes shall be supported.
4.2.2.1. Warm starting
The Application Player Manager shall support the starting of one or more instances of an Application Player without needing to specify which Application or Applications the started Application Player is required to render. This allows the Application Player Manager to start, but not immediately use, Application Players, such that they are ready to play Applications when required. Playing an Application in a warm-started Application Player results in a faster rendering time because the Application Player initialisation has already taken place; it is then only necessary for the Application Player to initialise and launch the Application when instructed to do so by the UIME.
4.2.2.2. Cold starting The Application Player Manager shall support the starting of Application Players as a consequence of a request to launch an Application of the corresponding type. In this mode, the Application Player shall begin rendering the Application as soon as possible after the Application Player has started.
4.2.3. Lifecycle
The Application Player Manager shall model the lifecycle of UIME started Application Players as follows:
4.2.4. Stopping
The Application Player Manager shall be responsible for the shutdown and process removal of UIME started Application Players. The Application Player Manager may stop, and subsequently remove, an Application Player after an Application the Application Player was hosting terminates. In this way, memory and other resources used by the Application Player can be reclaimed. This may typically happen prior to the device entering standby state, or when the system resources that the Application Player is consuming are required by higher priority processes.
4.3.2. Starting
The way Externally Started Application Players are started is beyond the scope of this chapter. The Application Player Manager shall support Externally Started Application Players which are started both before and after the UIME itself starts.
4.3.3. Lifecycle
The Application Player Manager shall provide a signalling mechanism to indicate to Externally Started Application Players that they are either running or suspended. When an Externally Started Application Player is in the RUNNING state, the UIME will consider requests from the Application Player to Activate Applications it is hosting. When an Externally Started Application Player is in the SUSPENDED state, the UIME will ignore requests from those Application Players to Activate Applications they are hosting, and ensure that no Application graphics rendered by those Application Players can be displayed. External Application Players shall use these signals to operate efficiently, and not attempt to render any Application graphics when in SUSPENDED state.
The Application Player Manager shall model the lifecycle of Externally Started Application Players as follows:
4.3.4. Terminating
The Application Player Manager shall not be responsible for the shutdown or process removal of External Application Players.
5. Application Manager
5.1. Introduction
The UIME shall support the concurrent running of multiple Applications. The UIME's Application Manager sub-component manages this by controlling the lifecycle of Applications. The Application Manager shall support the following:
Management of lifecycle and display of Applications it launches (UIME Launched Applications).
Management of display of Applications which are launched externally to it (Externally Launched Applications).
5.2.3. Launching
The Application Manager shall provide means to launch Applications. UIME policy shall determine whether to allow or deny Application launch requests and, as such, the Application Manager shall be capable of denying requests to launch Applications should it consider the launch inappropriate. Where the Application Manager allows a request to launch an Application, it will ensure that an appropriate Application Player is running to play the Application. If an Application Player of the necessary type is not already running, the Application Manager shall cold-start one, passing the URI of the Application to launch to the Application Player. Configuration parameters determine how an Application Player executable can be associated with an Application content type. The following Application launch scenarios shall be supported:
Applications whose launch is initiated by the Application Manager.
Applications whose launch is initiated by direct clients of the Application Manager.
Applications whose launch is initiated by other Applications.
The Application Manager shall ensure that the Active Application will have been previously associated with a focusable Window, and that Activation requests for Applications not associated with a focusable Window will be denied. The Application Manager shall ensure that User Input Events within the Active Application's Key Set will be received by the Active Application's Window at all times while that Application is Active (see Key Sets for the definition of an Application's Key Set). The Application Manager shall ensure that, when an Application is Activated, the Application's window shall be Focussed, and hence the Active Application shall be able to receive all User Input Events defined in the Application's Key Set. Whilst the Active Application shall always be running visibly in the foreground, the Application Manager shall support other Inactive Applications also running visibly in the background. The Application Manager shall ensure that Windows associated with Inactive Applications shall not be focussed, and hence Inactive Applications shall not be able to receive User Input Events except by the mechanism described in Grabbed Keys. UIME policy shall determine whether to allow or deny Application Activation or Deactivation requests, and the Application Manager shall be capable of denying requests to Activate or Deactivate Applications should it consider the state change inappropriate. An Application may become the Active Application following:
The successful launch of the Application, if using Immediate Activation.
A request to Activate the Application from another software component.
A successful Deactivation of another Application, where the Application is the highest priority alternative running Application.
An Active Application may be Deactivated following:
A request to Deactivate the Application from another software component.
A successful Activation of another Application.
Subsequent to the successful launch of an Application, the Application needs to be Activated before it can be displayed and receive User Input Events, and hence be used by the viewer. The Application Manager shall provide the following means of Activating an Application after a successful launch:
Immediate Activation. In this mode, the newly launched Application shall be Activated at the point that the Application is associated with a Window and can hence be displayed. This provides support for Native Applications and other applications which are not UIME aware, allowing them to be Activated without specifically having to request Activation.
Deferred Activation. In this mode, the newly launched Application shall not be Activated until specifically requested. This supports well-synchronised transitions between Applications when launching one Application from another, with the launcher Application remaining Active until such time as the launched Application specifically requests Activation, which can be deferred until that Application is fully ready to be interacted with.
This functionality is not intended to be generally available to Applications other than Platform UI Applications.
5.2.6. Termination
The Application Manager shall provide means for Applications to request termination of themselves or any running Applications they have launched. UIME policy shall determine whether to honour Application termination requests and, as such, the UIME shall be capable of denying requests to terminate Applications should it consider the termination inappropriate. Where an Application termination request is successful, the UIME shall notify the Application that termination is imminent prior to actually terminating the Application. The UIME shall provide a mechanism whereby the imminently terminating Application and, where appropriate, its launching Application are both able to respond to this signal. Applications may be terminated under the following circumstances:
Following a request to terminate the Application.
Following a transition to the DEVICE_STANDBY power state.
Following a request to terminate the process in which the UIME runs.
5.3.3. Launching
Externally started Application Players shall be responsible for managing the launching of the Applications which they render. It follows that the UIME shall not be responsible for the launching of Applications rendered by Externally Started Application Players.
5.3.6. Terminating
Externally Started Application Players shall be responsible for managing the termination of the Applications which they render. It follows that the Application Manager shall not be responsible for the termination of Applications rendered by Externally Started Application Players.
6. Window Manager
6.1. Introduction
The Window Manager UIME sub-component shall provide the following key functionality:
Allows the UIME to become aware that new Windows have been created, both in and out of process.
Allows the UIME to become aware that existing Windows have been removed and destroyed.
Allows other UIME sub-components to discover the current stacking and display properties of existing Windows.
Allows the UIME to become aware that stacking or display properties of existing Windows have changed.
Allows other UIME sub-components to request that the stacking and display properties of existing Windows are changed.
Applies UIME Policy to the stacking and display properties of existing Windows.
Provides the underlying mechanisms to the Application Manager to allow it to control Application display properties and User Input Event routing.
The Window Manager shall support a single Window being associated with a single Application. Where an Application attempts to create multiple windows, the Window Manager shall only support the display of the first Window. The Window Manager shall only support multiple Windows being created by and associated with a single Application Player, where those Windows are each created with reserved Application IDs. The Window Manager shall not support Multiple Applications or Application Players being associated with a single Window.
The Window Manager shall support Externally Started Application Players that create and destroy a Window to render each Application, and Externally Started Applications Players that re-use a single Window.
The Window Manager shall provide the capability to apply source and destination transformations, to support both the provision of magnification functionality to enable accessibility features and the optimisation of Application display for different output screen resolutions.
6.4.1. Opacity
The Window Manager shall be capable of modifying the opacity property of any Window created, without affecting the Application or Application Player associated with the modified Window, such that the Application will continue rendering to the Window, oblivious to the property change. As a performance optimisation, the Window Manager shall ensure that, if a Window's opacity is changed such that it becomes completely transparent, the Window's surface will no longer be composed with the surfaces of other, visible windows in the final composition buffer.
6.4.2. Colorization
The Window Manager shall be capable of modifying the colorization of any Window created, without affecting the Application or Application Player associated with the modified Window, such that the Application will continue rendering to the Window, oblivious to the change.
6.4.3. Geometry
Application Players should create windows of a size appropriate to the authored content dimensions of the Application they are required to play. The Window Manager shall be capable of modifying the source and destination geometry of any windows created, without affecting the Application or Application Player associated with the modified Window, such that those Applications will continue rendering to the Window, oblivious to the change in geometry.
6.4.3.1. Source geometry transformation
The Window Manager shall allow the specification of a sub-rectangle of any Window (the Source Rectangle), which can then be scaled to the Window's original dimensions as follows.
6.4.3.2. Destination geometry transformation
The Window Manager shall allow the specification of a rectangle to which the entire Window can be scaled (the Destination Rectangle).
6.4.3.3. Combination of Source and Destination geometry transformations
The Window Manager shall support the concurrent application of both the Source and Destination Geometry Transformations to any Window, such that the source rectangle can be stretched to fill the destination rectangle.
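A minimal sketch of this combined mapping, assuming a hypothetical rectangle type: a point inside the Source Rectangle (in window coordinates) maps to the Destination Rectangle by linear scaling.

typedef struct { int x, y, w, h; } Rect;   /* hypothetical rectangle type */

/* Map a window-coordinate point inside the Source Rectangle 'src' to
   output coordinates, stretching src to fill the Destination Rectangle
   'dst' (6.4.3.3). */
static void map_point(const Rect *src, const Rect *dst,
                      int x, int y, int *out_x, int *out_y)
{
    *out_x = dst->x + (x - src->x) * dst->w / src->w;
    *out_y = dst->y + (y - src->y) * dst->h / src->h;
}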
7. UI Manager
7.1. Introduction
The UI Manager sub-component provides overall User Interface control functionality, which is not specific to a particular Application instance.
8. Configuration
The UIME shall support the configuration of the behaviour of aspects of its constituent components, namely the Application Player Manager, Application Manager and Window Manager.
9. Resource Management
9.1. Management of available memory
Application Players store the pixel information of their Window surfaces, and any interim surfaces used to make up the composition of those Window surfaces, in System and Video Memory. The UIME shall monitor the amount of System and Video Memory available, and the amount used by each Application Player. Where an Application Player attempts to create a new Window, the UIME shall ensure there is sufficient Video Memory available for Window creation, and sufficient headroom in memory availability to allow the Application which will write to the new Window to create sufficient interim graphics buffers from which the Window surface will be composed. Where the UIME determines that the memory availability is insufficient, the following remedial options shall be supported (a sketch follows this list):
Denial of Window creation, and subsequent termination of the associated Application Player.
Instructing an Application Player to terminate any number of its Applications, in order to free sufficient memory to allow the new Application Player to render its Application.
Termination of another Application Player, freeing sufficient memory to allow the new Application Player to render its Application.
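A non-normative sketch of the admission check and remedial options, with hypothetical names throughout:

#include <stdbool.h>
#include <stddef.h>

/* Hypothetical UIME interfaces used only for illustration. */
extern size_t video_memory_available(void);
extern bool   instruct_player_to_terminate_apps(void *player, size_t bytes_needed);
extern bool   terminate_another_player(size_t bytes_needed);

/* Decide whether an Application Player may create a new Window of
   'window_bytes', keeping 'headroom_bytes' for its interim buffers. */
static bool admit_window(void *player, size_t window_bytes, size_t headroom_bytes)
{
    size_t needed = window_bytes + headroom_bytes;

    if (video_memory_available() >= needed)
        return true;

    /* Remedial options from the list above. */
    if (instruct_player_to_terminate_apps(player, needed))
        return true;
    if (terminate_another_player(needed))
        return true;

    return false;   /* deny creation; the requesting Application Player
                       may subsequently be terminated */
}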
The UIME shall support the configuration of System and Video Memory allocation limits which can be applied separately to each Application Player instance. The UIME shall provide support to allow an Application Player's System and Video Memory limits to be modified during the lifecycle of the Application Player instance. The UIME shall be capable of terminating or suspending any Application Player that exceeds the configured memory allocation limit. The UIME shall be capable of notifying Applications and Application Players of memory usage.
Chapter XIV
Presentation Engine Integration
1. Chapter Summary
This chapter specifies how to integrate presentation engines into the wider software architecture for YouView devices.
This chapter is not relevant for:
Application Developers
ISPs
Content Providers
3. Graphics Acceleration
3.1. DirectFB integration
Presentation engines shall create a DirectFB Window to render graphics. The Window shall be configured with the DSCAPS_PREMULTIPLIED surface capability. ARGB pixel format shall be used. The presentation engine shall create a window sized according to the authored dimensions of the application (where this is defined), subject to a maximum size specified when the presentation engine is launched. The presentation engine shall not manipulate any DisplayLayer directly.
The presentation engine shall notify the graphics subsystem before and after any update to the DirectFB window surface. Notification of changed regions shall be provided.
Note: for performance reasons, it is desirable that the regions notified as having been updated are as small as possible, covering only areas that have changed, except where the number of changed regions would become excessive.
Rendering of graphics into the DirectFB window need not use only the specified DirectFB APIs if other approaches achieve greater performance. For example, where a platform has OpenGL ES support, performance may be improved by using this for rendering. However, for integration with the wider graphics architecture, window updates must be notified using the specified DirectFB APIs.
The sequence diagram below shows the way in which the presentation engine must use the DirectFB APIs when redrawing its window. Before making any changes, presentation engines must call IDirectFBWindow::BeginUpdates() so that window composition is synchronised in cases where multiple windows (possibly from separate processes) are being updated at the same time. So that the window stack can be composed efficiently, window updates to non-overlapping regions must be made with separate calls to IDirectFBWindow::Flip using the DSFLIP_QUEUE flag. When a complete frame of updates has been made, DirectFB must be notified using the DSFLIP_FLUSH flag and the screen composition lock released using DSFLIP_ONCE. The sequence diagram shows this happening in a separate Flip() call with a NULL region. Alternatively, these flags can be included in the Flip() call of the final window update. Use of the DSFLIP_QUEUE and DSFLIP_FLUSH flags is not required if only one rectangular region needs to be updated to create a particular output frame.
Figure 41
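The update sequence can be sketched in C as follows. This is illustrative only; it assumes a DirectFB build that provides IDirectFBWindow::BeginUpdates (as used with SaWMan) and omits error handling.

#include <directfb.h>

/* Redraw one output frame consisting of several non-overlapping regions. */
static void redraw_frame( IDirectFBWindow  *window,
                          IDirectFBSurface *surface,
                          const DFBRegion  *regions,
                          int               num_regions )
{
     int i;

     /* Synchronise composition across processes before making changes. */
     window->BeginUpdates( window, NULL );

     for (i = 0; i < num_regions; i++) {
          /* ... render into 'surface' within regions[i] ... */

          /* Queue each non-overlapping updated region separately. */
          surface->Flip( surface, &regions[i], DSFLIP_QUEUE );
     }

     /* Flush the queued updates as one frame and release the screen
        composition lock, using a final Flip() with a NULL region. */
     surface->Flip( surface, NULL, DSFLIP_FLUSH | DSFLIP_ONCE );
}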
Where a bitmap is known to be fully opaque, DSBLIT_BLEND_ALPHACHANNEL may be omitted to improve performance. However, DirectFB implementations are not required to be able to ignore an alpha channel when rendering ARGB bitmaps so the alpha values in the source surface must still be correct.
Graphics operation: Bitmap rendering with a colour transform where the colour components are unaffected but the alpha value is multiplied by a factor between 0.0 and 1.0 inclusive.
DirectFB mapping: Blit/StretchBlit using DSBLIT_BLEND_ALPHACHANNEL and DSBLIT_BLEND_COLORALPHA with alpha = alphaMultiplier * 255. DSBLIT_SRC_PREMULTIPLY shall also be set if the source pixel data is not in pre-multiplied form; DSBLIT_SRC_PREMULTCOLOR shall be set otherwise.

Graphics operation: Bitmap rendering with a colour transform where one or more colour components are multiplied by factors between 0.0 and 1.0 inclusive and the alpha value is unaffected.
DirectFB mapping: Blit/StretchBlit using DSBLIT_BLEND_ALPHACHANNEL and DSBLIT_BLEND_COLORIZE with the red, green and blue colours set according to colour = colourMultiplier * 255. DSBLIT_SRC_PREMULTIPLY shall also be set if the source pixel data is not in pre-multiplied form.

Graphics operation: Bitmap rendering with a colour transform where one or more colour components and the alpha value are multiplied by factors between 0.0 and 1.0 inclusive.
DirectFB mapping: Blit/StretchBlit using DSBLIT_BLEND_ALPHACHANNEL, DSBLIT_BLEND_COLORALPHA and DSBLIT_BLEND_COLORIZE with the alpha, red, green and blue values set according to alpha = alphaMultiplier * 255 and colour = colourMultiplier * 255. DSBLIT_SRC_PREMULTIPLY shall also be set if the source pixel data is not in pre-multiplied form; DSBLIT_SRC_PREMULTCOLOR shall be set otherwise.

Graphics operation: Bitmap rendering with a colour transform which includes any offset value or which has negative coefficients.
DirectFB mapping: Mapping to DirectFB not required. Implementations may optionally implement a two-pass rendering of these cases using DirectFB to improve performance.

In all cases, the blending mode shall be DSPD_SRC_OVER (srcBlend: DSBF_ONE, dstBlend: DSBF_INVSRCALPHA).
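For example, the first mapping might be realised as follows. This is an illustrative sketch; the source surface is assumed to hold pre-multiplied pixel data, hence DSBLIT_SRC_PREMULTCOLOR rather than DSBLIT_SRC_PREMULTIPLY.

#include <directfb.h>

/* Blit 'source' with its alpha scaled by alphaMultiplier (0.0 to 1.0)
   and colour components unchanged, per the first mapping above. */
static void blit_alpha_scaled( IDirectFBSurface *dest,
                               IDirectFBSurface *source,
                               float alphaMultiplier, int x, int y )
{
     dest->SetBlittingFlags( dest, DSBLIT_BLEND_ALPHACHANNEL |
                                   DSBLIT_BLEND_COLORALPHA |
                                   DSBLIT_SRC_PREMULTCOLOR );
     dest->SetColor( dest, 0xff, 0xff, 0xff, (u8)(alphaMultiplier * 255) );
     dest->SetPorterDuff( dest, DSPD_SRC_OVER );   /* as required above */
     dest->Blit( dest, source, NULL, x, y );
}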
4.2. Presentation
If a presentation engine supports full screen video, it shall provide the capability to present the video in primary mode as described in section 4.2.1. If a presentation engine supports presentation of more than one video stream concurrently, the presentation engine shall additionally support the requirements of window mode presentation as described in section 4.2.2. If a presentation engine supports audio-only content, it shall be presented as described in section 4.2.3. Requirements for mixing presentation engine audio with other sources of audio are described in Chapter II Consumer Device Platform.
Window mode may be implemented using a DirectFB window to compose the window onto the main graphics plane, or it may use an additional video or graphics plane where the hardware has a suitable plane available. For both the primary and window modes of presentation, the presentation engine shall support positioning and rectangular masking of the video frame. All positions shall be interpreted as relative to the application's co-ordinate system.
5. User Input
User input events for all presentation engines are handled through DirectFB. Presentation engines receive Window events for key presses via the standard DirectFB event handling mechanisms. Chapter II Consumer Device Platform defines the way in which specific key events are mapped into the DirectFB system. Different presentation engines will have different requirements for key events. However, it is important that all applications can be controlled both from the device's standard input device and from keyboards or other accessories. The following table lists the DirectFB key codes that need to be mapped to the presentation engine's internal representation for particular key functions. In many cases, there is more than one DirectFB key code that the presentation engine must recognise for a particular key function. Key symbols with a cross in the R column are those that may be found on a standard remote control; symbols with a cross in the K column are those produced by a USB keyboard or a remote control with alphanumeric keyboard. Note that some key mappings are conditional on the state of the ALT key modifier.
DirectFB key symbols
DIKS_POWER DIKS_POWER2 DIKS_SMALL_P DIKS_CAPITAL_P DIKS_ESCAPE DIKS_GOTO DIKS_MENU DIKS_EPG DIKS_SMALL_E DIKS_MUTE DIKS_HELP DIKS_CURSOR_UP DIKS_CURSOR_DOWN DIKS_CURSOR_LEFT DIKS_CURSOR_RIGHT DIKS_OK DIKS_ENTER X X X X X X X X X X X X X X X X X X X ALT modifier set
R
X X
Conditions
Key function
Standby: toggle Standby: power on
Notes
X X X X X
Standby: toggle Standby: power on Close Search Menu Guide Guide Mute Help Up Down Left Right OK/Enter OK/Enter DIKS_RETURN and DIKS_ENTER are the same code
DIKS_BACK DIKS_SMALL_K DIKS_VOLUME_UP DIKS_VOLUME_DOWN DIKS_CHANNEL_UP DIKS_PAGE_UP DIKS_CHANNEL_DOWN DIKS_PAGE_DOWN DIKS_TEXT DIKS_SMALL_T DIKS_INFO DIKS_SMALL_I DIKS_ZOOM DIKS_SMALL_Z DIKS_REWIND DIKS_FASTFORWARD
X X X X X X X X X X X X X X X X ALT modifier set ALT modifier set ALT modifier set X X ALT modifier set
Back Back Volume up Volume down Channel up Channel up Channel down Channel down Text Text Info Info Zoom Zoom Rewind Fast forward
R
X X X X X X
Conditions
Key function
Stop Record Play Pause Skip forward Skip backward
Notes
X X X X X X X X X X X X X X X X X X
Rewind Fast forward Stop Record Play Pause Skip forward Skip backward Red Green Yellow Blue
Key identifier is in range DIKI_KP_0 DIKI_KP_9 and ALT modifier not set Key identifier is in range DIKI_0 DIKI_9 (i.e. main keyboard variants)
0 to 9
Remote control keypad and emulation Direct number key entry (i.e. do not apply multitap processing)
DIKS_0 DIKS_9
0 to 9 alternative mappings to be included if the presentation engine can support two sets. Audio description
DIKS_AUDIO | X | | | Audio description |
DIKS_SMALL_A | | X | ALT modifier set | Audio description |
DIKS_SUBTITLE | X | | | Subtitle |
DIKS_SMALL_S | | X | ALT modifier set | Subtitle |
DIKS_F1 to DIKS_F9 | X | | ALT modifier not set | Defined optional keys | Dual function: also delete
DIKS_CUSTOM0 to DIKS_CUSTOM9 | X | | | Manufacturer extensions |
DIKS_F1 to DIKS_F9 | | X | ALT modifier set | Manufacturer extensions |
DIKS_CAPITAL_A to DIKS_CAPITAL_Z | | X | ALT modifier not set | A to Z | Where alphabetic entry supported
DIKS_SMALL_A to DIKS_SMALL_Z | | X | ALT modifier not set | a to z | Where alphabetic entry supported
DIKS_SPACE | | X | ALT modifier not set | Space | Where alphabetic entry supported
DIKS_PARENTHESIS_RIGHT | | X | ALT modifier not set | ) | Where alphabetic entry supported
DIKS_EXCLAMATION_MARK | | X | ALT modifier not set | ! | Where alphabetic entry supported
DIKS_AT | | X | ALT modifier not set | @ | Where alphabetic entry supported
DIKS_NUMBER_SIGN | | X | ALT modifier not set | # | Where alphabetic entry supported
DIKS_DOLLAR_SIGN | | X | ALT modifier not set | $ | Where alphabetic entry supported
DIKS_PERCENT_SIGN | | X | ALT modifier not set | % | Where alphabetic entry supported
DIKS_CIRCUMFLEX_ACCENT | | X | ALT modifier not set | ^ | Where alphabetic entry supported
DIKS_AMPERSAND | | X | ALT modifier not set | & | Where alphabetic entry supported
DIKS_ASTERISK | | X | Key identifier is DIKI_8 and ALT modifier not set | * | Where alphabetic entry supported
DIKS_ASTERISK | | X | Key identifier is DIKI_KP_MULT and ALT modifier not set | * | Where alphabetic entry supported
DIKS_PARENTHESIS_LEFT | | X | ALT modifier not set | ( | Where alphabetic entry supported
DIKS_COLON | | X | ALT modifier not set | : | Where alphabetic entry supported
DIKS_SEMICOLON | | X | ALT modifier not set | ; | Where alphabetic entry supported
DIKS_PLUS_SIGN | | X | Key identifier is DIKI_EQUALS_SIGN and ALT modifier not set | + | Where alphabetic entry supported
DIKS_PLUS_SIGN | | X | Key identifier is DIKI_KP_PLUS and ALT modifier not set | + | Where alphabetic entry supported
DIKS_EQUALS_SIGN | | X | ALT modifier not set | = | Where alphabetic entry supported
DIKS_LESS_THAN_SIGN | | X | ALT modifier not set | < | Where alphabetic entry supported
DIKS_COMMA | | X | ALT modifier not set | , | Where alphabetic entry supported
DIKS_UNDERSCORE | | X | ALT modifier not set | _ | Where alphabetic entry supported
DIKS_MINUS_SIGN | | X | Key identifier is DIKI_MINUS_SIGN and ALT modifier not set | - | Where alphabetic entry supported
DIKS_MINUS_SIGN | | X | Key identifier is DIKI_KP_MINUS and ALT modifier not set | - | Where alphabetic entry supported
DIKS_PERIOD | | X | Key identifier is DIKI_PERIOD and ALT modifier not set | . | Where alphabetic entry supported
DIKS_PERIOD | | X | Key identifier is DIKI_KP_DECIMAL and ALT modifier not set | . | Where alphabetic entry supported
DIKS_SLASH | | X | Key identifier is DIKI_SLASH and ALT modifier not set | / | Where alphabetic entry supported
DIKS_SLASH | | X | Key identifier is DIKI_KP_DIV and ALT modifier not set | / | Where alphabetic entry supported
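To illustrate how the mappings above might be consumed, the sketch below reads key-down events from a DirectFB event buffer and branches on the key symbol and the ALT modifier. The handler functions are hypothetical names used for the example; they are not defined by this specification.

    #include <directfb.h>
    #include <stdbool.h>

    /* Hypothetical presentation-engine handlers. */
    void on_up(void);
    void on_ok(void);
    void on_text(void);
    void on_character(char c);

    /* Sketch: dispatch DirectFB key-down events, honouring the ALT modifier
       conditions given in the table above. */
    static void pump_key_events(IDirectFBEventBuffer *events)
    {
        DFBWindowEvent evt;

        while (events->WaitForEvent(events) == DFB_OK) {
            while (events->GetEvent(events, DFB_EVENT(&evt)) == DFB_OK) {
                if (evt.type != DWET_KEYDOWN)
                    continue;

                bool alt = (evt.modifiers & DIMM_ALT) != 0;

                switch (evt.key_symbol) {
                case DIKS_CURSOR_UP:
                    on_up();
                    break;
                case DIKS_OK:
                case DIKS_ENTER:            /* DIKS_RETURN is the same code */
                    on_ok();
                    break;
                case DIKS_SMALL_T:
                    if (alt)
                        on_text();          /* Text key function */
                    else
                        on_character('t');  /* plain alphanumeric entry */
                    break;
                default:
                    break;
                }
            }
        }
    }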
6. Fonts
Presentation engines shall include support for rendering fonts through the DirectFB FontProvider mechanism. The set of fonts shall include the FS Me family of fonts in Light, Regular, Bold and Heavy weights, each in normal and italic variants.
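A minimal sketch of loading one of these fonts through the DirectFB FontProvider follows; the font file path and pixel height are illustrative assumptions, not values fixed by this specification.

    #include <directfb.h>

    /* Sketch: load an FS Me face and draw a string with it. */
    static void draw_with_fs_me(IDirectFB *dfb, IDirectFBSurface *surface)
    {
        DFBFontDescription fdesc;
        IDirectFBFont     *font;

        fdesc.flags  = DFDESC_HEIGHT;
        fdesc.height = 24;                       /* assumed size */

        /* assumed install location of the font file */
        dfb->CreateFont(dfb, "/usr/share/fonts/FSMe-Regular.ttf", &fdesc, &font);

        surface->SetFont(surface, font);
        surface->SetColor(surface, 0xFF, 0xFF, 0xFF, 0xFF);
        surface->DrawString(surface, "YouView", -1, 20, 40, DSTF_TOPLEFT);

        font->Release(font);
    }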
7. Networking
Presentation engines shall support HTTP and HTTPS (HTTP over TLS) connections using the libcurl shared library required by Chapter II Consumer Device Platform. Caching shall be enabled for HTTP requests made by the presentation engine. Use of HTTP proxies shall be supported. The presentation engine shall support the use of the platform-specified bundle of HTTPS root certificates for server authentication. The presentation engine shall provide a mechanism for other software to be aware of the TLS connection state during the establishment of a secure connection and, based on this information, to determine whether the connection should be allowed to be established. The presentation engine shall support client authentication for HTTP requests. Implementations shall use such interfaces and mechanisms as are necessary to protect the confidentiality of keys.

The presentation engine shall add an x-youview-appid extension header field to all HTTP requests made over TLS using the presentation engine's normal HTTP request APIs. The header shall not be included in HTTP requests that are not carried over a secure TLS connection. The x-youview-appid header consists of the fully-qualified application ID for the application that made the request, prefixed by a field indicating whether the application was signed by a developer or was signed for deployment. The header is formally defined as follows:

    X-YouView-Appid = "x-youview-appid" ":" appid-value
    appid-value     = ( "dev" | "youview" ) ":" appid
    appid           = 1*212appchar
    appchar         = ALPHA | DIGIT | "." | "-"

Examples:

    x-youview-appid: dev:com.example.HelloWorld
    x-youview-appid: youview:com.example.LivePlayerApp

The presentation engine shall not allow applications to set headers with header field names that begin with the string x-youview. As HTTP header field names are case insensitive, a case-insensitive comparison must be used to enforce this. Presentation engines shall maintain a separate store of persistent cookies for each application. Applications shall not have access to the cookies of other applications.
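By way of illustration, the sketch below attaches the x-youview-appid header with libcurl only when the request is carried over TLS, and rejects application-supplied header names in the reserved space using a case-insensitive comparison. The certificate bundle path is an assumed location, and the application ID would in practice come from the platform rather than a literal.

    #include <curl/curl.h>
    #include <stdbool.h>
    #include <stdio.h>
    #include <string.h>
    #include <strings.h>

    /* Sketch: applications may not set header names beginning x-youview;
       header field names are case insensitive, so compare accordingly. */
    static bool app_header_allowed(const char *field_name)
    {
        return strncasecmp(field_name, "x-youview", 9) != 0;
    }

    /* Sketch: issue a request, adding x-youview-appid only over TLS. */
    static void fetch(const char *url, const char *prefixed_appid)
    {
        CURL *h = curl_easy_init();
        struct curl_slist *hdrs = NULL;
        char header[256];

        if (strncmp(url, "https:", 6) == 0) {
            /* e.g. prefixed_appid = "dev:com.example.HelloWorld" */
            snprintf(header, sizeof header, "x-youview-appid: %s", prefixed_appid);
            hdrs = curl_slist_append(hdrs, header);
        }

        curl_easy_setopt(h, CURLOPT_URL, url);
        curl_easy_setopt(h, CURLOPT_HTTPHEADER, hdrs);
        /* assumed location of the platform-specified root certificate bundle */
        curl_easy_setopt(h, CURLOPT_CAINFO, "/etc/ssl/certs/platform-roots.pem");
        curl_easy_perform(h);

        curl_slist_free_all(hdrs);
        curl_easy_cleanup(h);
    }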
Chapter XV
MHEG Integration
1. Chapter Summary
Chapter XIV Presentation Engine Integration defines the way in which presentation engines must be integrated for YouView devices. MHEG is tightly integrated with broadcast functionality and is a well-established technology on set-top box devices. This chapter describes how the integration requirements for the MHEG engine differ from those for other presentation engines. Unless otherwise stated, all the requirements of Chapter XIV Presentation Engine Integration also apply to MHEG. This chapter covers only integration aspects for MHEG. See also Chapter VIII Broadcast Content Delivery.
This chapter is not relevant for: Application Developers, ISPs or Content Providers.
2. Graphics Acceleration
2.1. DirectFB integration
The generic DirectFB integration requirements described in Chapter XIV Presentation Engine Integration apply with the following exceptions and additions:
- ARGB4444 pixel format shall be used.
- The MHEG engine shall create a 1280x720 pixel window.
- The MHEG engine may create at most one additional intermediate surface of the same size to implement LockScreen, plus other surfaces requiring up to a maximum of 10 Mbytes of video memory.
- The MHEG engine shall set the ApplicationID of its DirectFB Window to 0x80000001 using the IDirectFBWindow::SetApplicationID() DirectFB API.
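A minimal sketch of this window setup follows, assuming a DirectFB version that provides IDirectFBWindow::SetApplicationID; error handling is omitted.

    #include <directfb.h>

    /* Sketch: create the MHEG engine's 1280x720 ARGB4444 window and tag it
       with the mandated ApplicationID. */
    static void create_mheg_window(IDirectFBDisplayLayer *layer,
                                   IDirectFBWindow **window)
    {
        DFBWindowDescription desc;

        desc.flags       = DWDESC_WIDTH | DWDESC_HEIGHT |
                           DWDESC_PIXELFORMAT | DWDESC_CAPS;
        desc.width       = 1280;
        desc.height      = 720;
        desc.pixelformat = DSPF_ARGB4444;
        desc.caps        = DWCAPS_ALPHACHANNEL;

        layer->CreateWindow(layer, &desc, window);
        (*window)->SetApplicationID(*window, 0x80000001);
    }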
3. Channel Tuning
The MHEG engine shall support the LifecycleExtension, as described in DTG D-Book 7 Part A v1, section 16.2.9. Note: because of the difference between quiet and normal tunes, the receiver's currently selected channel may not be the same as the viewer service context, i.e. the channel presented by the receiver's UI.
4. User Input
4.1. DirectFB to MHEG key mappings
The table below defines the required mappings between DirectFB key events and MHEG UserInputEventTag values based on the requirements of Chapter XIV Presentation Engine Integration. Key symbols with a cross in the R column are those that may be found on a standard remote control; symbols with a cross in the K column are those produced by a USB keyboard or a remote control with an alphanumeric keyboard. In some cases, multiple key symbols are mapped to the same MHEG key to ensure that all device functions are available from USB key events as well as remote control key events.
DirectFB key symbols | R | K | Conditions | MHEG UserInput value
DIKS_CURSOR_UP | X | X | | Up (1)
DIKS_CURSOR_DOWN | X | X | | Down (2)
DIKS_CURSOR_LEFT | X | X | | Left (3)
DIKS_CURSOR_RIGHT | X | X | | Right (4)
DIKS_OK | X | | | Select (15)
DIKS_ENTER, DIKS_RETURN | | X | | Select (15)
DIKS_BACK | X | | | Cancel (16)
DIKS_SMALL_K | | X | ALT modifier set | Cancel (16)
DIKS_TEXT | X | | | Text (104)
DIKS_SMALL_T | | X | ALT modifier set | Text (104)
DIKS_REWIND | X | | | Rewind (126)
DIKS_FASTFORWARD | X | | | Fast Forward (125)
DIKS_STOP | X | | | Stop (120)
DIKS_PLAY | X | | | Play (121)
DIKS_PAUSE | X | | | Pause (122)
DIKS_PLAYPAUSE | X | | | Play/Pause (127)
DIKS_NEXT | X | | | Skip Forward (123)
DIKS_PREVIOUS | X | | | Skip Back (124)
DIKS_4 | | X | Key identifier is in range DIKI_KP_0 to DIKI_KP_9 and ALT modifier set | Rewind (126)
DIKS_6 | | X | Key identifier is in range DIKI_KP_0 to DIKI_KP_9 and ALT modifier set | Fast Forward (125)
DIKS_2 | | X | Key identifier is in range DIKI_KP_0 to DIKI_KP_9 and ALT modifier set | Stop (120)
DIKS_5 | | X | Key identifier is in range DIKI_KP_0 to DIKI_KP_9 and ALT modifier set | Play (121)
DIKS_1 | | X | Key identifier is in range DIKI_KP_0 to DIKI_KP_9 and ALT modifier set | Pause (122)
DIKS_3 | | X | Key identifier is in range DIKI_KP_0 to DIKI_KP_9 and ALT modifier set | Play/Pause (127)
DIKS_9 | | X | Key identifier is in range DIKI_KP_0 to DIKI_KP_9 and ALT modifier set | Skip Forward (123)
DIKS_7 | | X | Key identifier is in range DIKI_KP_0 to DIKI_KP_9 and ALT modifier set | Skip Back (124)
DIKS_RED | X | X | ALT modifier set (keyboard) | Red (100)
DIKS_GREEN | X | X | ALT modifier set (keyboard) | Green (101)
DIKS_YELLOW | X | X | ALT modifier set (keyboard) | Yellow (102)
DIKS_BLUE | X | X | ALT modifier set (keyboard) | Blue (103)
DIKS_0 to DIKS_9 | X | X | Key identifier is in range DIKI_0 to DIKI_9 or ALT modifier not set | 5-14 (digits 0 to 9)
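The table can be realised as a simple lookup, as in the sketch below, which covers a representative subset of the rows; the function name is illustrative.

    #include <directfb.h>
    #include <stdbool.h>

    /* Sketch: map a DirectFB key-down event to an MHEG UserInputEventTag
       value per the table above; returns -1 where no mapping applies. */
    static int mheg_user_input(const DFBWindowEvent *evt)
    {
        bool alt = (evt->modifiers & DIMM_ALT) != 0;

        switch (evt->key_symbol) {
        case DIKS_CURSOR_UP:    return 1;             /* Up */
        case DIKS_CURSOR_DOWN:  return 2;             /* Down */
        case DIKS_CURSOR_LEFT:  return 3;             /* Left */
        case DIKS_CURSOR_RIGHT: return 4;             /* Right */
        case DIKS_OK:
        case DIKS_ENTER:        return 15;            /* Select */
        case DIKS_BACK:         return 16;            /* Cancel */
        case DIKS_SMALL_K:      return alt ? 16 : -1; /* Cancel via keyboard */
        case DIKS_RED:          return 100;
        case DIKS_GREEN:        return 101;
        case DIKS_YELLOW:       return 102;
        case DIKS_BLUE:         return 103;
        default:
            /* digits 0-9 map to MHEG values 5-14 when ALT is not set */
            if (evt->key_symbol >= DIKS_0 && evt->key_symbol <= DIKS_9 && !alt)
                return 5 + (evt->key_symbol - DIKS_0);
            return -1;
        }
    }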
5. Networking
The MHEG engine shall support the InteractionChannelExtension and ICStreamingExtension. The MHEG engine shall support HTTP and HTTPS (HTTP over TLS) connections using the libcurl shared library required by Chapter II Consumer Device Platform. TLS client authentication and the x-youview-appid header are not required for MHEG. Use of HTTP proxies shall be supported.
6. Application Lifecycle
While the MHEG engine is running, it will either present MHEG applications on-screen or present a full-screen transparent window, depending on whether MHEG presentation is currently visible. See also section 2.4. The following sections describe how the MHEG engine interacts with the UI Management Engine (UIME) described in Chapter XIII Consumer Device UI Management.
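The transparent state might be realised as in the following sketch, assuming the MHEG engine's DirectFB window from section 2.1; this is one possible approach, not a required implementation.

    #include <directfb.h>

    /* Sketch: when no MHEG application is being presented, clear the engine's
       full-screen window to fully transparent pixels while keeping it mapped. */
    static void show_transparent(IDirectFBWindow *window)
    {
        IDirectFBSurface *surface;

        window->GetSurface(window, &surface);
        surface->Clear(surface, 0x00, 0x00, 0x00, 0x00);  /* alpha = 0 */
        surface->Flip(surface, NULL, 0);
        window->SetOpacity(window, 0xFF);  /* window itself stays visible */

        surface->Release(surface);
    }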
Chapter XVI
Consumer Device Local Diagnostics
1. Chapter Summary
This chapter specifies the Local Diagnostics support to be built into YouView consumer devices. Local Diagnostics will provide viewers with a summary of important device information and operational behaviour, which will allow them either to solve problems themselves (self-help) or to be guided to the appropriate information by contextual help, a customer support agent or the YouView customer support website. This will empower the more technically savvy viewers to help themselves, and give customer support agents and website tools the information needed to diagnose, correct and document commonly occurring issues. Local Diagnostics will be provided as part of the Platform Main UI application and will be accessible through the Settings menu and/or the Help Area. This chapter describes the Local Diagnostics capability for a range of functional areas. This chapter does not provide a detailed definition of the on-screen appearance of each area of Local Diagnostics capability.
This chapter is not relevant for: Content Providers, Application Developers or ISPs.
Chapter XVII
Consumer Device Usage And Error Reporting
1. Chapter Summary
This chapter describes the support for usage and error reporting to be built into the YouView consumer device. This includes both the capture and backhaul of such data. Note: nothing in this specification precludes the direct capture of usage and error reporting by a Content Provider's Application and backhaul to an HTTP endpoint of their own. Section 2 introduces the concepts of usage and error reporting. It also raises a number of privacy considerations which shall be observed. Note: the handling, storage and use of usage and error data after it has been backhauled, including meeting data protection and privacy obligations, is beyond the scope of this specification. Section 3 describes the usage and error reporting architecture, which has the logical component Usage Collection Agent (UCA) at its core.
2. Overview
2.1. Usage data
Usage data is a collective term for any information related to viewer activity on a consumer device. The vast majority of this data will be generated by the normal, expected operation of the device. Usage data comprises usage events, which in turn relate to discrete user activities, and settings information stored on the device in some common configuration repository. The following examples of usage events are provided to illustrate user activities and settings/configuration information. A single on-demand asset viewing is seen as a series of events:
- On-demand Asset is selected to Play
- Player Application is launched
- Buffering of On-demand Asset begins
- Buffer size reaches sufficient size to enable playback of an on-demand asset
- On-demand Asset playback begins
- On-demand Asset playback is paused by the viewer
- Playback speed set to 0x
Usage data provides a benefit to both the viewer and the platform in many ways, including:
- Identifying parts of the user interface that are well used and hence potential areas for further enhancement.
- Identifying parts of the user interface that are not well used and hence potential areas for change or removal.
- Identifying navigational paths through the user interface that are well used and hence potential areas for optimisation, e.g. reducing the number of button clicks.
- Determining the rate at which the population of devices in the field upgrades to a new version of Core Device Software or Platform Software.
- Generating most popular lists for use within the Platform Main UI.
Further benefit can be derived when usage data is combined with information about the device from which it was backhauled, e.g. model and software version. This can be used to determine problems specific to a particular device model or software release.
Error data can provide value in a raw form. However, as with usage data, further benefit can be derived when error data is combined with information about the device from which it was backhauled, e.g. model and software version.
The implementation of support for usage and error data capture and export on the device needs to be robust and secure. Hence:
- It shall not be possible for third-party software (including Content Provider Applications) to access and export usage and error data.
- It shall not be possible for any person with physical access to the device to extract usage and error data using trivial means.
In both cases this applies to usage and error data in any form. Finally, it shall be possible for the viewer to securely clear any usage and error data held on the device. The handling, storage and use of usage and error data after it has been backhauled, including meeting data protection and privacy obligations, is beyond the scope of this specification.
3.3. Exporters
It shall be possible to configure the UCA to send usage and error data to a number of Exporters on the device. Each Exporter can be tailored to the requirements of a specific external interface. These interfaces are the gateways through which usage data can be backhauled from the device to some remote server. The Exporter is a translator between the internal private format used by the UCA for the capture and temporary storage of usage and error data, and the format which is expected by the endpoint exposed by the backhaul interface.
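One way to picture the Exporter boundary is as a narrow translate-and-send interface, as in the sketch below; the type and member names are purely illustrative and are not defined by this specification.

    #include <stddef.h>

    /* Hypothetical UCA-internal record type; its layout is private to the UCA. */
    struct uca_record;

    /* Sketch: an Exporter translates UCA-internal usage/error records into the
       format expected by one backhaul endpoint and hands them to that
       interface; each configured Exporter supplies its own implementation. */
    struct uca_exporter {
        const char *name;   /* e.g. a hypothetical "https-backhaul" */

        /* translate and send a batch of records; returns 0 on success */
        int (*export_records)(const struct uca_record *records, size_t count);
    };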
Chapter XVIII
Annexes
Annex A
Related Documents
DTG, D-Book 7 Part A v1, March 2011.
ETSI TS 102 578, PowerLine Telecommunications (PLT); Coexistence between PLT Modems and Short Wave Radio broadcasting services, v1.1.8, August 2007.
IEEE 802.3, LAN/MAN CSMA/CD (Ethernet) Access Method, 2008.
IEEE 802.11g, Wireless Local Area Networks, 2003.
IEEE 802.11n, Wireless Local Area Networks, 2009.
IETF RFC 1738, Uniform Resource Locators (URL), December 1994.
IETF RFC 2109, HTTP State Management Mechanism, February 1997.
IETF RFC 2616, Hypertext Transfer Protocol -- HTTP/1.1, June 1999.
IETF RFC 2617, HTTP Authentication: Basic and Digest Access Authentication, June 1999.
IETF RFC 3376, Internet Group Management Protocol, Version 3, October 2002.
IETF RFC 3852, Cryptographic Message Syntax (CMS), July 2004.
IETF RFC 3986, Uniform Resource Identifier (URI): Generic Syntax, January 2005.
IETF RFC 4294-bis, IPv6 Node Requirements, March 2010.
IETF RFC 5280, Internet X.509 Public Key Infrastructure Certificate and Certificate Revocation List (CRL) Profile, May 2008.
IEC 62455, Internet protocol (IP) and transport stream (TS) based service access, June 2007.
W3C, HTML 4.01 Specification, December 1999.
W3C Recommendation, Timed Text Markup Language (TTML) 1.0, November 2010.
Marlin Developer Community, Marlin Simple Secure Streaming Specification, version 1.0.
Marlin Trust Management Organisation, Marlin Client Adopter Agreement.
Annex B
Glossary

Term | Explanation
Access Services | Subtitles and audio description
Application | Packaged and managed software that provides functionality to consumers
Application Player | Runtime environment for the execution of Applications. Examples could be a Flash player, MHEG engine or W3C browser
Core Device Software | Software that is managed by the Device Manufacturer
DVR | Digital Video Recorder: records television broadcasts to local storage such as a hard drive
EPG | Electronic Programme Guide: an on-screen listing for scheduled television
ISP | Internet Service Provider: an organisation that provides a user with access to the Internet
Platform | The environment in which the device is deployed
Platform Applications | Applications that are managed by the Platform Operator or Device Manufacturer
Platform Configuration | A set of parameters that allow the behaviour of the device to be configured after deployment without requiring the software to be updated
Platform Main UI | The Application that provides the core user interface for the device
Platform Software | Software that is managed and updated by the Platform Operator
Platform UI Applications | Applications that are managed by the Platform Operator
System Memory | General-purpose memory available to the Application CPU that may be used to hold graphics
Video Memory | Memory accessible directly by graphics acceleration and graphics display hardware
VOD | Video On Demand: the facility to permit a viewer to watch or listen to video or audio on demand