
Evaluation of Color Encodings for High Dynamic Range Pixels


Ronan Boitard (a,b) and Rafal K. Mantiuk (c)
(a) Technicolor, 975 Av. des Champs Blancs, 35576 Cesson-Sevigne, France
(b) IRISA, 263 Avenue du Général Leclerc, 35000 Rennes, France
(c) School of Computer Science, Bangor University, United Kingdom

ABSTRACT
Traditional Low Dynamic Range (LDR) color spaces encode a small fraction of the visible color gamut, which
does not encompass the range of colors produced on upcoming High Dynamic Range (HDR) displays. Future
imaging systems will require encoding a much wider color gamut and luminance range. Such a wide color gamut
can be represented using floating point HDR pixel values, but those are inefficient to encode. They also lack
the perceptual uniformity of the luminance and color distribution that is provided (approximately) by most
LDR color spaces. Therefore, there is a need to devise an efficient, perceptually uniform and integer-valued
representation for high dynamic range pixel values. In this paper we evaluate several methods for encoding
colour HDR pixel values, in particular for use in image and video compression. Unlike other studies, we test both
luminance and color difference encodings in rigorous 4AFC threshold experiments to determine the minimum
bit-depth required. The results show that the Perceptual Quantizer (PQ) encoding provides the best perceptual
uniformity in the considered luminance range; however, the gain in bit-depth is rather modest. A more significant
difference can be observed between the color difference encoding schemes, of which the YDuDv encoding seems
to be the most efficient.
Keywords: Quantization Artifacts, HDR, Color Difference Encoding, Bit-Depth, Perceptual Transfer Function

1. INTRODUCTION
Traditional Low Dynamic Range (LDR) color spaces, such as BT.709 [1], encode a small fraction of the visible
color gamut, for luminance values ranging from 0.1 to 100 cd/m2. These color spaces cannot represent the
range of colors and luminances that the human visual system can perceive. High Dynamic Range (HDR) imagery
aims at overcoming these limitations by capturing, storing and reproducing all that the human eye can perceive [2].
However, current HDR pixel representation formats use floating point values, which are inefficient to encode. They
also lack the perceptual uniformity of the luminance and color distribution that is provided (approximately) by
most LDR color spaces. Therefore, there is a need to devise an efficient and perceptually uniform integer-valued
representation for HDR pixels.
Following the interest that HDR video has attracted at recent technology shows (CES, NAB, etc.),
several standardization groups, such as the Society of Motion Picture and Television Engineers (SMPTE) and
the Moving Picture Experts Group (MPEG), are currently working on HDR pixel representations for production
and international program exchange. Their primary focus is on deriving color difference encodings that fit
the requirements of video compression standards. Such requirements include encoding both the luminance and
chrominance channels in an approximately perceptually uniform space and representing HDR content with a
minimum bit-depth without impairing its visual quality. Based on psychophysical studies [3,4], several Perceptual
Transfer Functions (PTFs) have been proposed to perceptually encode HDR luminance [5]. However, less work
has been devoted to encoding chrominance channels that can represent the full visible gamut.
In this paper, we evaluate the minimum bit-depth at which quantization artefacts remain invisible at any
luminance level. Unlike other studies [6], our experiments test both luminance and chrominance encodings in
rigorous 4AFC threshold experiments to determine the minimum bit-depth required. In Section 2, we present
the differences between LDR and HDR color pixel encodings before describing the evaluated HDR color difference
encodings. We then describe our experiments along with their results in Section 3. Finally, we conclude the paper
in Section 4.

Further author information: (Send correspondence to Ronan Boitard)
Ronan Boitard: E-mail: [email protected]

2. BACKGROUND
In this section, we describe LDR color pixel encodings along with their limitations in representing HDR color pixels.
We then present several propositions for encoding HDR color pixels, some of them still under development.

2.1 LDR Color Pixel Encodings


Digital color pixels can be represented in two ways: an additive color system, which combines several primaries,
usually three: Red, Green and Blue (RGB); or a luminance/chrominance decomposition, which removes the
luminance information from the chrominance channels. LDR color pixels are represented using integer values
whose distribution is optimized for human observers. Thus, images are encoded in a perceptually linear domain,
which serves two goals: removing information that would be invisible after decoding (visual noise) and optimizing
the limited bit-depth to minimize the visual loss due to quantization. The perceptually encoded and quantized
luminance channel is denoted luma, while the chrominance channels are denoted chroma.
In LDR imagery, the perceptual encoding is performed by a non-linear transfer function called gamma
encoding [7], which was designed through psychophysical studies for luminance levels ranging from 0.1 to 100 cd/m2
(the capacity of CRT display technology). The gamma encoding is applied either to the three color channels (non-
constant luminance) or only to the luminance channel (constant luminance). The recommendation for High
Definition TeleVision (HDTV) production and exchange is described by ITU-R Recommendation BT.709 [1]
(also known as Rec. 709). This recommendation specifies the location of the three primaries, the white point, a
bit-depth (8 or 10 bits) and a luma/chroma decomposition (YCbCr). The color gamut boundaries and sampling
of such a representation depend on the location of its primaries and on the bit-depth, respectively. Note that
the YCbCr representation is commonly referred to as a color difference encoding because it adds an offset to the
chroma channels so as to center them in the middle of the used bit-depth. Color difference encodings are favored
for video compression as they remove correlation between the luma and chroma channels.
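As a small illustration of the gamma encoding described above, the following sketch applies the Rec. 709 opto-electronic transfer curve to a linear value normalized to [0, 1]; the constants (4.5, 1.099, 0.099, and the 0.018 threshold) are the published BT.709 values, and the function name is ours:

```python
# Sketch of the Rec. 709 gamma encoding (OETF) for a linear value L in [0, 1].
# The linear segment below 0.018 avoids the infinite slope of a pure power
# function at black.
def bt709_gamma(L):
    if L < 0.018:
        return 4.5 * L
    return 1.099 * L ** 0.45 - 0.099
```

The piecewise form is one reason "gamma" curves are only approximately perceptually uniform: the exponent was tuned for the 0.1 to 100 cd/m2 range of CRT displays.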
Due to recent improvements in display technologies (local dimming, OLED, etc.), this standard has become
outdated and hence another recommendation was drafted in 2012 (BT.2020 [8]). This recommendation
enlarges both the color gamut and the bit-depth (10 and 12 bits), but the gamma encoding function remains
the same. It also supplies an updated transformation matrix for the luma/chroma decomposition. Figure 1
illustrates the proportion of the full visible gamut that the BT.709 and BT.2020 color spaces cover, along
with a representation of the CbCr plane.

2.2 HDR Color Pixel Encodings


Despite the improvements brought by BT.2020, these encodings still cannot encompass the full color gamut
and luminance range that the human eye can perceive. HDR imagery, through the use of the CIE 1931 XYZ color
space [9] and floating point values, matches and can even surpass the limitations of the human visual system [2]. However,
such a representation requires too much in terms of storage capacity, computation time and throughput to be
considered for consumer devices. Furthermore, image and video processing algorithms are devised to process integer-valued
images. To this end, perceptual encoding functions are used to convert floating point physical values into
perceptually encoded integer values; these functions are commonly referred to as Perceptual Transfer Functions (PTFs) or Electro-Optical
Transfer Functions (EOTFs).
One of the first perceptually uniform codings of HDR luminance was derived from threshold versus
intensity (t.v.i.) models [10]. The derivation involved rescaling luminance values so that the difference in code
values corresponded to the detection threshold throughout the entire encoded range of luminance. The encoding
was shown to require between 10 and 11 bits to encode the range of luminance from 10^-4 cd/m2 to 10^9 cd/m2.
However, these numbers were based on visual models rather than actual measurements made with images
or video. In later work, the peaks of the contrast sensitivity function were used instead of the t.v.i. to derive

Figure 1. Left: CIE 1931 xy chromaticity diagram [9] with the primaries and boundaries of the color gamuts of BT.709 [1],
BT.2020 [8] and the NEC PA241W display. Both recommendations use Illuminant D65 as the white point. Right: CbCr chroma
plane at constant luminance.

PTF                            Equation
PU-HDRVDP [14,15]              lookup table
Perceptual Quantizer (PQ) [6]  V = ((c1 + c2 L^m1) / (1 + c3 L^m1))^m2
Gamma-Log [16]                 V = L^gamma if L <= f;  a log(L + b) + c if L > f
Rho-Gamma                      V = log(1 + (rho - 1) (L / Lmax)^(1/gamma)) / log(rho)
Log-Linear                     V = log10(L)
Arri Alexa                     V = max(0, e L + f) if L <= f;  c log10(a L + b) + d if L > f
S-Log                          V = (a log10(b L + c) + d) + e
PU based on Barten's CSF [4]   lookup table
Table 1. PTFs considered with the corresponding equations. L is the HDR luminance in cd/m2 while V is the luma
(perceptually encoded luminance). The results of the PTFs are normalized to the range [0;1].
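As a concrete example, the PQ curve from Table 1 can be sketched in a few lines. The constants below are the published SMPTE ST 2084 values, and the normalization of L by 10,000 cd/m2 follows that standard; the function name is ours:

```python
# Sketch of the Perceptual Quantizer (PQ) transfer function (SMPTE ST 2084).
m1 = 2610 / 16384        # ~0.1593
m2 = 2523 / 4096 * 128   # ~78.84
c1 = 3424 / 4096         # ~0.8359
c2 = 2413 / 4096 * 32    # ~18.85
c3 = 2392 / 4096 * 32    # ~18.69

def pq_encode(L):
    """Map absolute luminance L (cd/m^2) to a normalized luma V in [0, 1]."""
    Ln = (max(L, 0.0) / 10000.0) ** m1
    return ((c1 + c2 * Ln) / (1 + c3 * Ln)) ** m2
```

By construction, pq_encode(10000) reaches exactly 1, and 100 cd/m2 (the nominal LDR peak) lands near the middle of the code range.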

the luminance encoding [11]. Another example of a luminance encoding (PTF) is the Perceptual Quantizer (PQ) [6],
which was derived using a similar procedure to [11], but from a different CSF model. This encoding was reported
to require less than 10 bits to represent natural images without visual loss; however, a smaller luminance range,
from 10^-3 cd/m2 to 10^4 cd/m2, was considered. Several such PTFs, some of them currently considered in the
ad hoc MPEG group on High Dynamic Range and Wide Color Gamut Content Distribution [12], are listed in
Table 1. Figure 2 plots the encoding of luminance values ranging from 0.005 to 10,000 cd/m2 on 12 bits, along
with the associated maximum quantization error. The optimum encoding should require the smallest number of
bits per pixel and at the same time minimize the visibility of contouring artefacts due to quantization
into integer values. The PQ has been standardized by the SMPTE [13] and is intended to enable the creation of
video images with an increased luminance range. Note that the studied PTFs (and color difference encodings)
correspond to display-referred encodings, that is to say, encodings that aim at avoiding quantization artifacts
when content is reproduced on a display.

[Figure 2: four plots. Top row: encoded 12-bit value as a function of luminance (left: PU-Bangor, PQ, Gamma-Log,
Rho-Gamma; right: Log-Linear, Arri Alexa, S-Log, Barten). Bottom row: corresponding quantization error with the
c.v.i. = delta(L)/L curve overlaid.]

Figure 2. Top: luminance values encoded with different PTFs on 12 bits. Luminance range considered: [0.005;
10,000] cd/m2. Bottom: quantization error for 12-bit encoding. Each bottom plot shares the legend of the top plot
above it. The contrast versus intensity (c.v.i.) function has been added to the two bottom plots (yellow continuous
line).

Regarding color difference encodings, three main approaches have been formulated so far: the BT.2020
YCbCr [8], YDzDx and YDuDv. The YCbCr encoding cannot represent the full visible gamut and is obtained
by converting RGB tri-stimulus values using the transformation matrix described in the BT.2020 recommendation [8].
The YDzDx encoding converts pixels represented in the CIE 1931 XYZ color space [9] using a transformation matrix
currently considered by the SMPTE [17]. Finally, the YDuDv encoding is based on the CIE Lu'v' color space [18]. Table 2
summarizes the transformations for the color difference encodings considered in this paper. The quantization to
a targeted bit-depth follows Recommendation ITU-R BT.1361 [19]:

    D_Y  = INT[ (219 Y  +  16) * 2^(n-8) ]
    D_Ca = INT[ (224 Ca + 128) * 2^(n-8) ]                                  (1)
    D_Cb = INT[ (224 Cb + 128) * 2^(n-8) ]

where Y/D_Y and Ca/D_Ca, Cb/D_Cb represent the unquantized/quantized luma and chroma channels respectively,
while n is the bit-depth.
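The quantization of Eq. (1) is straightforward to sketch; interpreting INT as round-to-nearest is our assumption:

```python
# Sketch of the BT.1361-style quantization of Eq. (1): luma Y in [0, 1] and
# chroma Ca, Cb in [-0.5, 0.5] are mapped to n-bit integer code values.
def quantize(Y, Ca, Cb, n):
    scale = 2 ** (n - 8)
    DY  = int(round((219 * Y  +  16) * scale))
    DCa = int(round((224 * Ca + 128) * scale))
    DCb = int(round((224 * Cb + 128) * scale))
    return DY, DCa, DCb
```

At 8 bits this reproduces the familiar video code range: luma spans 16 to 235 and chroma spans 16 to 240, centered at 128.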

Representation  Input      Y                                   Ca                             Cb
YCbCr           RGB        Y = 0.2627 R + 0.6780 G + 0.0593 B  Cb = (B - Y) / 1.8814          Cr = (R - Y) / 1.4746
YDzDx           CIE XYZ    Y = Y                               Dz = ((2800/2763) Z - Y) / 2   Dx = (X - (2741/2763) Y) / 2
YDuDv           CIE Lu'v'  Y = L                               Du = u'/0.62 - 0.5             Dv = v'/0.62 - 0.5
Table 2. Transformations for the YCbCr, YDzDx and YDuDv color difference encodings.
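As an illustration of the YDuDv row of Table 2, the sketch below derives the u', v' chromaticities from CIE XYZ using the standard CIE 1976 UCS formulas and then applies the 0.62 scale and 0.5 offset from the table; the function names are ours:

```python
# Sketch of the YDuDv chroma construction from Table 2.
def xyz_to_uv_prime(X, Y, Z):
    """CIE 1976 UCS chromaticities u', v' from XYZ tri-stimulus values."""
    d = X + 15 * Y + 3 * Z
    return 4 * X / d, 9 * Y / d

def ydudv_chroma(X, Y, Z):
    """Du, Dv chroma channels as defined in Table 2."""
    u, v = xyz_to_uv_prime(X, Y, Z)
    return u / 0.62 - 0.5, v / 0.62 - 0.5
```

Dividing by 0.62 stretches the occupied u'v' region (which stays below roughly 0.62) over the available range before the 0.5 centering offset is subtracted.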

3. EXPERIMENTS
The goal of this paper is to evaluate the different PTFs and color difference encodings considered in the MPEG
activities with respect to two criteria: the minimum bit-depth and the perceptual uniformity. The minimum
bit-depth is the value at which quantization artefacts remain invisible at any luminance level. Perceptual
uniformity is achieved if a small perturbation of a component value is approximately equally perceptible across the
range of values (or luminances) [10,20]. Perceptual uniformity is an important criterion for any digital processing
that relies on a weighted average of pixel values, for example filtering, motion estimation, distortion metrics,
etc. HDR pixel values are physically linear (linearly related to luminance) but not perceptually linear
(non-linearly related to brightness). If we consider a difference of 10 cd/m2 at both 10 cd/m2 and 1000 cd/m2,
the perceived difference at 10 cd/m2 is obviously larger than at 1000 cd/m2. However, a distortion
metric such as the Sum of Absolute Differences (SAD) would weight both distortions equally, hence giving the
same importance to both although the perceived difference is not the same.
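The example above can be made concrete with a toy computation: the same 10 cd/m2 step yields very different Weber (relative) contrasts at the two adaptation levels, while a linear difference metric such as SAD scores both steps identically. The helper name is ours:

```python
# Toy illustration: identical absolute luminance steps, very different
# relative (Weber) contrasts, yet equal contributions to a linear metric.
def weber_contrast(delta_L, L):
    """Relative contrast of a luminance step delta_L at background level L."""
    return delta_L / L

sad_dark   = abs(10)   # SAD contribution of a 10 cd/m^2 step at 10 cd/m^2
sad_bright = abs(10)   # ...and at 1000 cd/m^2: identical for a linear metric
contrast_dark   = weber_contrast(10, 10)     # 100% contrast, clearly visible
contrast_bright = weber_contrast(10, 1000)   # 1% contrast, near threshold
```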
To assess the minimum bit-depth and perceptual uniformity, we conducted a series of psychophysical
experiments. This section first describes the experimental procedure before reporting and analyzing the results of our
experiments.

3.1 Experimental Procedures


In all our experiments, the observers were presented with four patches with smooth gradients, only one of which was
quantized at the targeted bit-depth. Examples of such patches are shown in Figure 4. On the left, a smooth
gradient, presented on a gray background of the same luminance as the average luminance of the patch, provides
a possibly conservative case for the detection of quantization artefacts. The artefacts are less likely to be visible in
complex images and video, in which contrast masking, glare, the small size of features and motion raise
the detection thresholds. Note that the slope of the gradient may affect the detectability of the quantization
artefacts. For that reason, we experimented with different slopes and used the one that maximized the visibility
of banding or contouring. On the right of Figure 4, two chrominance gradients have been added on top of the
luminance one.
The observers were asked to select the one of the four patches that appeared different from the others (4AFC) [21].
The NEC PA241W display was addressed using a 10-bit signal whose bit-depth was further extended to 12 bits
by spatio-temporal dithering. The average luminance of the patches varied from 0.05 to 150 cd/m2. We assumed
that the measurements above 150 cd/m2 can be extrapolated, as the Contrast Sensitivity Function (CSF) [4] does
not vary much above that level. To achieve luminance levels lower than 5 cd/m2, the observers wore a pair of goggles
with attached neutral density filters (Kodak Wratten 96, 1.0D and 2.0D). The gamut of the NEC PA241W display
is plotted on top of the BT.709 and BT.2020 gamuts in Figure 1. The QUEST procedure [22] was used to determine the
detection threshold in terms of the fractional number of bits used for encoding.

3.2 Experiment 1: Luminance Encoding


As human observers are more sensitive to luminance than to color variations, introducing artefacts in the luminance
channel has more impact than in the chrominance channels. Furthermore, most image and video processing is
performed on the luminance; for example, in video compression, motion estimation is performed only on the
luma channel and the same motion vectors are used to compensate both chroma channels. These processes
rely on perceptual uniformity. Hence, an encoding that is not perceptually uniform will impair the

[Figure 3: minimum bit-depth (about 8 to 11.5 bits) as a function of luminance (0.05 to 150 cd/m2) for the
PU-HDRVDP, PQ, Gamma-Log and Log-Linear encodings.]

Figure 3. Minimum bit-depth required to encode HDR luminance values using several PTFs. For improved legibility, the
locations of the error bars have been shifted along the x-axis. The error bars denote 95% confidence intervals describing
the variance between observers. Note that, even though fractional bit quantization was measured in the experiment, in
all practical applications the number of bits needs to be rounded up.

quality of these processes and yield sub-optimal results. Finally, most color difference encodings base their
decomposition on the luma channel. For all these reasons, our first experiment focuses on the perceptual encoding
of the luminance channel alone.
Figure 3 shows the results of an experiment in which observers determined the minimum number of bits
required for encoding luminance using four different PTFs (from Figure 2). The PTFs were designed to encode
luminance ranging from 0.005 to 10,000 cd/m2, and the measurements were made for luminance levels
ranging from 0.05 to 150 cd/m2. The results show that the encoding using the logarithmic function (Log-Linear)
requires a different number of bits for dark (8.5 bits) and bright (10.5 bits) regions. The number of bits is more
stable for the perceptually uniform encoding based on the HDR-VDP-2 CSF model (PU-HDRVDP) [14,23] and
the Gamma-Log function [16], but the bit-depth is the most uniform across the luminance levels for the Perceptual
Quantizer (PQ) [6,24]. Note that the PQ was reported in [6] to require less than 10 bits for natural images, while
with our test patch more than 10 bits are required. This result indicates that our experimental setup is more
conservative than using natural images.
The results for the minimum bit-depth of luminance encoding show that, for all PTFs considered, the
luminance from 0.005 to 10,000 cd/m2 can be encoded without perceptual loss using 11 bits or more. If only
the bit-depth is considered, then all PTFs are equivalent, since fractional bit-depths cannot be implemented in
hardware. The logarithmic encoding offers an advantage for uncalibrated HDR images, as the same relative
quantization error is introduced across the encoded luminance range. The shortcoming of the logarithmic
encoding is not only its poor perceptual uniformity, but also that it is likely to reveal more camera noise in darker
tones. Cameras have noise characteristics similar to those of the visual system and suffer from higher noise at
low light levels; the stronger quantization at low luminance helps to suppress such noise. The other PTFs offer
better perceptual uniformity, with the PQ encoding being the most uniform in the tested range.

3.3 Experiment 2: Color Difference Encoding


Color difference encoding, similar to the opponent color space attributed to human vision, is favored for
video compression as it removes correlations between the luma and chroma channels. Indeed, achieving high
decorrelation ensures that no redundant information is encoded. For that reason, we first investigate the amount
of decorrelation that the three color difference encodings presented in Section 2.1 can achieve. Table 3 reports the
Pearson product-moment correlation coefficient [25] between the luma and chroma channels for 5 HDR images [26].
The results show that YDuDv achieves a higher decorrelation for three out of the five images. Furthermore,

Image        Dz     Cb     Du     Dx     Cr     Dv
Balloon      0.033  0.098  0.014  0.272  0.052  0.190
FireEater2   0.950  0.740  0.391  0.909  0.790  0.375
Market3      0.226  0.176  0.379  0.189  0.289  0.224
Seine        0.413  0.407  0.245  0.445  0.390  0.262
Tibul2       0.933  0.860  0.086  0.912  0.540  0.027
Table 3. Pearson product-moment correlation coefficient [25] between the luma and the two chroma channels for each of the
three color difference encodings. These encodings were performed on 12 bits.

Figure 4. Left: test patch with the luminance gradient used for the first experiment. Right: test patch with three different
gradients: one along the luminance and two along the CIE u' and v' chrominances. The amplitude of the luminance
gradient increases along the vertical axis from ±0.01 to ±0.5 in the log domain. The chrominance gradients range
from -0.1 to 0.1 around the D65 white point.

for two images (FireEater2 and Tibul2), the YDzDx and YCbCr encodings have high correlation factors (close
to one). The consequence of these results is twofold: firstly, information will be compressed twice, once
embedded in the luma channel and again in the chroma channels. Secondly, a higher bit-depth will most likely
be required to encode the chroma channels, as more information is present. We assess the validity of this
second assumption by evaluating these color difference encodings with respect to the minimum bit-depth and
the perceptual uniformity in a second psychophysical experiment.
We chose to evaluate the color difference encoding schemes associated only with the PQ PTF, as it was
shown to be the most perceptually uniform. The test patch is built by combining three different gradients: one
along the luminance and two along the CIE u' and v' chrominances [9] (Figure 4, right). The detection threshold
for contouring artefacts was measured separately for the quantization of the luma and the two chroma channels. The
experimental procedure was similar to that used in Experiment 1.
The results of the second experiment are plotted in Figure 5. As expected, the results for luma quantization,
shown on the left, are similar to those shown in Figure 3. However, the YCbCr encoding appears to require about
0.5 bit less precision.
In the case of chroma channel quantization (Figure 5, right), the difference in bit-depth precision between the
chroma encoding schemes is much larger. The YDuDv encoding [18], based on the CIE u'v' chromatic coordinates,
requires the fewest bits, especially at low luminance levels. Given that two chroma channels need to
be encoded, this can bring significant gains in compression efficiency. Note, however, that YDuDv is also the
least perceptually uniform scheme, and some adjustment of the encoding may be required to address this problem.
Otherwise, the chroma channels will be represented with higher precision than needed, which may cause invisible
chroma differences to be encoded at low luminance levels and lower the coding performance.
The YCbCr encoding requires 9 bits and YDzDx requires at least 10 bits. This makes them less
efficient than YDuDv. Given that both YCbCr and YDzDx also result in higher correlation with the luma channel,
the YDuDv encoding seems to be the most efficient of those tested in terms of HDR pixel coding.
To summarize, encoding HDR color pixels can yield different results depending on the chosen color difference
representation. When the amount of information is the main priority, the experimental results indicate that the

[Figure 5: minimum bit-depth (about 5 to 11 bits) as a function of luminance (0.05 to 50 cd/m2) for the YDzDx,
YDuDv and YCbCr encodings; left panel for luma quantization, right panel for chroma quantization.]

Figure 5. Minimum bit-depth required to encode color pixels using the three chroma encoding schemes: YDzDx, YCbCr and
YDuDv. Left: results for luma channel quantization. Right: results for the quantization of the two chroma channels. For
improved legibility, the locations of the error bars have been shifted along the x-axis.

YDuDv coding requires only 8 bits to encode the chrominance channels without visible distortions. If the application
requires perceptual uniformity, then the YCbCr representation outperforms the other two representations
considered, but requires at least 9 bits.

4. CONCLUSIONS
We presented an experimental evaluation of color pixel encodings used to represent HDR pixel values. In
particular, we have shown that the PQ encoding provides good perceptual uniformity in the considered luminance
range; however, the gain in bit-depth is rather moderate. In some applications, logarithmic encoding can prove
equally effective, especially since it does not require HDR values to be calibrated in absolute units. A more
significant difference can be observed between the color difference encoding schemes, of which the YDuDv encoding [18]
seems to be the most efficient.
The insights from this study are not limited to the compression of HDR content. In fact, perceptually
uniform encoding of HDR luminance is a mandatory step for most image processing operations. To achieve high
perceptual uniformity, the PQ PTF could be used, as it offers the best uniformity of the PTFs tested in
this study. Although our experiments only tested low luminance values due to the limitations of our display,
contrast sensitivity experiments show little change in sensitivity above 50 cd/m2, when the cones are fully
responsive. Consequently, results for higher luminance are expected to be close to our results for 50 cd/m2.

ACKNOWLEDGMENTS
This project was partially funded by the COST Action IC1005 on HDR video.

REFERENCES
[1] ITU, “Recommendation ITU-R BT.709-3: Parameter values for the HDTV standards for production and
international programme exchange.” International Telecommunications Union (1998).
[2] Reinhard, E., Heidrich, W., Debevec, P., Pattanaik, S., Ward, G., and Myszkowski, K., [High Dynamic
Range Imaging, 2nd Edition: Acquisition, Display, and Image-Based Lighting], Morgan Kaufmann (2010).
[3] Barlow, H. B., “Purkinje Shift and Retinal Noise,” Nature 179, 255–256 (Feb. 1957).
[4] Barten, P. G. J., “Physical model for the contrast sensitivity of the human eye,” in [SPIE 1666, Human
Vision, Visual Processing, and Digital Display III,], Rogowitz, B. E., ed., 57–72 (Aug. 1992).

[5] Tourapis, A. M., Su, Y., Singer, D., Fogg, C., Van der Vleuten, R., and Francois, E., “Report on the
XYZ/HDR Exploratory Experiment 1 (EE1): Electro-Optical Transfer Functions for XYZ/HDR Delivery,”
in [ISO/IEC JTC1/SC29/WG11 MPEG2014/M34165 ], IEEE, ed. (2014).
[6] Miller, S., Nezamabadi, M., and Daly, S., “Perceptual Signal Coding for More Efficient Usage of Bit Codes,”
SMPTE Motion Imaging Journal 122, 52–59 (May 2013).
[7] Poynton, C., [Digital Video and HD: Algorithms and Interfaces], Morgan Kaufmann, Elsevier Science
(2012).
[8] ITU, “Recommendation ITU-R BT.2020: Parameter values for ultra-high definition television systems for
production and international programme exchange.” International Telecommunications Union (2012).
[9] Smith, T. and Guild, J., “The c.i.e. colorimetric standards and their use,” Transactions of the Optical
Society 33(3), 73 (1931).
[10] Mantiuk, R., Krawczyk, G., Myszkowski, K., and Seidel, H.-P., “Perception-motivated high dynamic range
video encoding,” ACM Transactions on Graphics (Proc. of SIGGRAPH) 23, 733 (Aug. 2004).
[11] Mantiuk, R., Myszkowski, K., and Seidel, H.-p., “Lossy Compression of High Dynamic Range Images and
Video,” in [SPIE Proceedings Vol. 6057: Human Vision and Electronic Imaging XI], Rogowitz, B. E.,
Pappas, T. N., and Daly, S. J., eds., 60570V–60570V–10 (Feb. 2006).
[12] Luthra, A., Francois, E., and Husak, W., “Draft Requirements and Explorations for HDR / WCG Content
Distribution and Storage,” in [ISO/IEC JTC1/SC29/WG11 MPEG2014/N14510 ], IEEE, Valencia, Spain
(2014).
[13] Society of Motion Picture & Television Engineers, “High Dynamic Range Electro-Optical Transfer Function
of Mastering Reference Displays,” in [SMPTE ST 2084], 1–14, SMPTE (2014).
[14] Mantiuk, R., Kim, K. J., Rempel, A. G., and Heidrich, W., “HDR-VDP-2: a calibrated visual metric for
visibility and quality predictions in all luminance conditions,” ACM Trans. Graph (Proc. SIGGRAPH) 30,
1 (July 2011).
[15] Mantiuk, R., Myszkowski, K., and Seidel, H.-P., “Lossy compression of high dynamic range images and
video,” in [Proc. of Human Vision and Electronic Imaging XI ], Proceedings of SPIE 6057, 60570V, SPIE,
San Jose, USA (February 2006).
[16] Touzé, D., Lasserre, S., Olivier, Y., Boitard, R., and François, E., “HDR Video Coding based on Local LDR
Quantization,” in [HDRi2014 - Second International Conference and SME Workshop on HDR imaging],
(2014).
[17] SMPTE, “YDzDx Color-Difference Encoding for XYZ Signals.” Society of Motion Picture and Television
Engineers SMPTE ST 2085 (2014).
[18] Ward Larson, G., “LogLuv encoding for full-gamut, high-dynamic range images,” Journal of Graphics
Tools 3(1), 15–31 (1998).
[19] ITU, “Recommendation ITU-R BT.1361: Worldwide unified colorimetry and related characteristics of future
television and imaging systems.” International Telecommunications Union (2002).
[20] Poynton, C. A., [A Technical Introduction to Digital Video ] (1996).
[21] Bi, J., Lee, H.-S., and O’Mahony, M., “d′ and Variance of d′ for Four-Alternative Forced Choice (4-AFC),”
Journal of Sensory Studies 25, 740–750 (Oct. 2010).
[22] Watson, A. and Pelli, D., “QUEST: A Bayesian adaptive psychometric method,” Perception & Psy-
chophysics 33(2), 113–120 (1983).
[23] Kim, K. J., Mantiuk, R., and Lee, K. H., “Measurements of achromatic and chromatic contrast sensitivity
functions for an extended range of adaptation luminance,” in [Human Vision and Electronic Imaging ],
86511A (Mar. 2013).
[24] Kunkel, T., Ward, G., Lee, B., and Daly, S., “HDR and wide gamut appearance-based color encoding and
its quantification,” in [2013 Picture Coding Symposium (PCS)], 357–360, IEEE, San Jose (Dec. 2013).
[25] Pearson, K., “Note on Regression and Inheritance in the Case of Two Parents,” Royal Society of London
Proceedings 58(1), 240–242 (1895).
[26] Lasserre, S., LeLéannec, F., and Francois, E., “Description of HDR sequences proposed by Technicolor,” in
[ISO/IEC JTC1/SC29/WG11 JCTVC-P0228], IEEE, San Jose, USA (2013).
