Article
Improving Discrimination in Color Vision Deficiency
by Image Re-Coloring
Huei-Yung Lin 1, *, Li-Qi Chen 2 and Min-Liang Wang 3
1 Department of Electrical Engineering, Advanced Institute of Manufacturing with High-Tech Innovation,
National Chung Cheng University, Chiayi 621, Taiwan
2 Department of Electrical Engineering, National Chung Cheng University, Chiayi 621, Taiwan;
[email protected]
3 Asian Institute of TeleSurgery/IRCAD-Taiwan, Changhua 505, Taiwan; [email protected]
* Correspondence: [email protected]; Tel.: +886-5-272-0411
Received: 19 April 2019; Accepted: 13 May 2019; Published: 15 May 2019
Abstract: People with color vision deficiency (CVD) cannot fully perceive the colorful world due to damaged color receptors. In this work, we present an image enhancement approach to assist colorblind people in identifying the colors they are not able to distinguish naturally. An image re-coloring algorithm based on eigenvector processing is proposed for robust color separation under the color deficiency transformation. It is shown that the eigenvector of color vision deficiency is distorted by an angle in the λ, Y-B, R-G color space. The experimental results show that our approach is useful for the recognition and separation of the CVD confusing colors in natural scene images. Compared to the existing techniques, our results on natural images under CVD simulation perform very well in terms of RMS, HDR-VDP-2 and an IRB-approved human test. Both the objective comparison with previous works and the subjective evaluation on human tests validate the effectiveness of the proposed method.
1. Introduction
Most human beings have the ability of color vision perception, which senses the frequency of light reflected from object surfaces. However, color vision deficiency (CVD) is a common genetic condition [1]. It is in general not a fatal or serious disease, but it still brings inconvenience to many patients. People with color vision deficiency (so-called color blindness) cannot fully perceive the colorful world due to damaged color receptors. Whether caused by genetic factors or chemical injury, the damaged receptors are not able to distinguish certain colors. There are a few common types of color vision deficiency such as protanomaly (red weak), deuteranomaly (green weak) and tritanomaly (blue weak). They can be detected and verified easily by special color patterns (e.g., Ishihara plates [2]), but, unfortunately, cannot be cured by medical surgery or other treatments. People with color vision deficiency are a minority of the population, and they are sometimes ignored and restricted by our society.
In many places, colorblind people are not allowed to have a driver's license. A number of careers in engineering, medicine and other related fields impose restrictions on the ability of color perception. The display and presentation of most media, on various devices and in many forms, do not specifically take color vision deficiency into consideration. Although the weakness in distinguishing different colors does not obviously affect people's learning and cognition, it remains a challenge in color-related industries. In this work, we propose an approach to assist people with color vision deficiency in telling the difference among the confusing colors as much as possible. A simple yet reasonable technique, "color reprint", is developed and used to represent the CVD-proof colors. The algorithm not only preserves the naturalness and details of the scenes, but also possesses real-time processing capability. It can, therefore, be implemented on low-cost or portable devices, and brought into everyday life.
Human color vision is based on three light-sensitive pigments [3,4]. It is trichromatic and represented in three dimensions. A color stimulus is specified by the power contained at each wavelength. Normal trichromacy arises because the retina contains three classes of cone photo-pigment neural cells: L-, M- and S-cones. A range of wavelengths of light stimulates each of these receptor types to various degrees. For example, yellowish green light stimulates both L- and M-cones equally strongly, but S-cones only weakly. Red light stimulates L-cones more than M-cones, and S-cones hardly at all. Our brain combines the information from each type of cone cell, and responds to different wavelengths of light as shown in Table 1. The color processing is carried out in two stages. First, the stimuli from the cones are recombined to form two color-opponent channels and a luminance channel. Second, an adaptive signal regulation process operates within the working range and stabilizes the object appearance under illumination changes. When any of the sensitive pigments is damaged or loses its functionality [1], people can only view a part of the visible spectrum compared to those with normal vision capability [5] (see Figure 1).
(a) One of the images in the Ishihara plates (left), and the images enhanced by the proposed re-coloring algorithm for protanomaly, deuteranomaly and tritanomaly, respectively (the remaining three).
(b) The images in (a) generated by color vision deficiency simulation. The first image is the deuteranomaly simulation of the original Ishihara plate. The remaining images are the simulation results of protanomaly, deuteranomaly and tritanomaly on the re-colored images, respectively.
Figure 1. (a) An original image from Ishihara plates and the enhanced images using our re-coloring
algorithms for protanomaly, deuteranomaly and tritanomaly. (b) The images generated from a color
vision deficiency simulation tool [6]. The results show that our image enhancement technique is able to
improve check pattern recognition under various types of color vision deficiency.
Table 1. Cone cells in the human eye and their response to light wavelength.

Cone Type    Responds Most Strongly To
L-cone       long wavelengths (reddish light)
M-cone       medium wavelengths (greenish light)
S-cone       short wavelengths (bluish light)
The molecular genetics of human color vision has been studied extensively in the literature. Nathans et al. described the isolation and sequencing of genomic and complementary DNA clones which encode the apoproteins of the red, green and blue pigments [4]. With newly refined methods, the number and ratio of these genes were re-examined in men with normal color vision. A recent report reveals that many males have more pigment genes on the X chromosome than previously thought,
and many have more than one long-wave pigment gene [7]. The loss of characteristic sensitivities
of the red and green receptors introduced into the transformed sensitivity curves also indicates the
appropriate degrees of luminosity deficit for deuteranopes and protanopes [8].
Color vision deficiency has two main causes: natural genetic factors and impairment of the nerves or brain. A protanope suffers from a lack of the L-cone photo-pigment, and is unable
to discriminate reddish and greenish hues since the red–green opponent mechanism cannot be
constructed. A deuteranope does not have sufficient M-cone photo-pigment, so the reddish and
greenish hues are not distinguishable. People with tritanopia do not have the S-cone photo-pigment,
and, therefore, cannot discriminate yellowish and bluish hues [9]. The literature shows that more than
8% of the world population suffer from color vision deficiency (see Table 2). For color vision correction,
gene therapy which adds the missing genes is sufficient to restore full color vision without further
rewiring of the brain. It has been tested on a monkey with colorblindness since birth [10]. Nevertheless,
there are also non-invasive alternatives available by means of computer vision techniques.
Table 2. Approximate percentage occurrences of various types of color vision deficiency [11].
In [12], Huang et al. propose a fast re-coloring technique to improve accessibility for people with impaired color vision. They design a method to derive an optimal mapping to maintain the contrast
between each pair of the representative colors [13]. In a subsequent work, an image re-coloring
algorithm for dichromats using the concept of key color priority is presented [14]. A color blindness
plate (CBP) is presented by Chen et al., which is a satisfactory way to test color vision in the computer
vision community [15]. The approach is adopted to demonstrate normal color vision, as well as
red–green color vision deficiency. Rasche et al. propose a method to preserve the image details
while reducing the gamut dimension, and seek a color to gray mapping to maintain the contrast and
luminance consistency [16]. They also describe a method which allows the re-colored images to deliver
the content with increased information to color-deficient viewers [17]. In [18], Lau et al. present a
cluster-based approach to optimize the transformation for individual images. The idea is to preserve
the information from the source space as much as possible while maintaining the natural mapping
as faithfully as possible. Lee et al. develop a technique based on fuzzy logic and correction of digital
images to improve the visual quality for individuals with color vision disturbance [19]. Similarly, Poret
et al. design a filter based on the Ishihara color test for color blindness correction [20].
Most algorithms for color transformation aim to preserve the color information of the original image while keeping the re-colored image as natural as possible. This differs from some other image processing and computer vision tasks: for color vision deficiency correction, images appearing natural after enhancement is an important requirement. The goal is not only to keep the image details intact, but also to keep the colors as smooth as those without the re-coloring process. These conditions constrain the re-arrangement of the color distribution so that colorblind people can discriminate different colors [21,22]. Moreover, it is generally agreed that color perception is subjective and is not exactly the same for different people. In this work, the proposed method is evaluated on color vision deficiency simulation tools as well as with human tests. We use the RMS (root mean square) error to quantify the change after re-coloring, and HDR-VDP (visual difference predictor) [23] to compare the visibility and quality as perceived subjectively by humans. Our algorithms not only preserve the naturalness and details of the images, but also process them almost in real time.
2. Approach
In this paper, a technique called color warping (CW) is proposed for effective image re-coloring.
It uses the orientation of the eigenvectors of the color vision deficiency simulation results to warp
the color distribution. In general, the acquired images are presented in the RGB color space for
display. This is, however, not suitable for color vision-related processing. For human color perception
related tasks, the images are first transformed to the λ, Y-B, R-G color space based on the CIECAM02
model [24]. It consists of a transformation from RGB to LMS [25] using

$$\begin{bmatrix} L \\ M \\ S \end{bmatrix} = \begin{bmatrix} 0.7328 & 0.4296 & -0.1624 \\ -0.7036 & 1.6975 & 0.0061 \\ 0.0030 & 0.0136 & 0.9834 \end{bmatrix} \begin{bmatrix} R \\ G \\ B \end{bmatrix} \quad (1)$$

followed by a linear mapping from LMS to the λ, Y-B, R-G opponent channels.
Since the above transformations are linear, it is easy to verify that the relationship between the RGB and λ, Y-B, R-G color spaces is given by

$$\begin{bmatrix} \lambda \\ Y\text{-}B \\ R\text{-}G \end{bmatrix} = \begin{bmatrix} 0.3479 & 0.5981 & -0.3657 \\ -0.0074 & -0.1130 & -1.1858 \\ 1.1851 & -1.5708 & 0.3838 \end{bmatrix} \begin{bmatrix} R \\ G \\ B \end{bmatrix} \quad (3)$$

and

$$\begin{bmatrix} R \\ G \\ B \end{bmatrix} = \begin{bmatrix} 1.2256 & -0.2217 & 0.4826 \\ 0.9018 & -0.3645 & -0.2670 \\ -0.0936 & -0.8072 & 0.0224 \end{bmatrix} \begin{bmatrix} \lambda \\ Y\text{-}B \\ R\text{-}G \end{bmatrix}. \quad (4)$$
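To make the pipeline concrete, the following is a minimal sketch of the two linear mappings in Equations (3) and (4), assuming floating-point RGB values normalized to [0, 1]; the function names are ours, and the final clipping simply guards against small round-trip errors caused by the rounded matrix entries.

```python
import numpy as np

# Forward matrix from Equation (3): RGB -> (lambda, Y-B, R-G).
RGB2OPP = np.array([[ 0.3479,  0.5981, -0.3657],
                    [-0.0074, -0.1130, -1.1858],
                    [ 1.1851, -1.5708,  0.3838]])

# Inverse matrix from Equation (4): (lambda, Y-B, R-G) -> RGB.
OPP2RGB = np.array([[ 1.2256, -0.2217,  0.4826],
                    [ 0.9018, -0.3645, -0.2670],
                    [-0.0936, -0.8072,  0.0224]])

def rgb_to_opponent(img):
    """Map an H x W x 3 RGB image (values in [0, 1]) to the opponent space."""
    return np.einsum('ij,hwj->hwi', RGB2OPP, img)

def opponent_to_rgb(opp):
    """Map an H x W x 3 opponent-space image back to RGB for display."""
    rgb = np.einsum('ij,hwj->hwi', OPP2RGB, opp)
    return np.clip(rgb, 0.0, 1.0)  # guard against rounding overshoot
```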
A flowchart of the proposed method is illustrated in Figure 2. The "Eigen-Pro" stage represents the eigenvector processing. The color warping is the key idea of this work, and the color constraints are used to reduce the distortion after the color space transformation.
Figure 2. The flowchart of the proposed technique. In the pipeline, the images are first transformed to the λ, Y-B, R-G color space for the re-coloring process, followed by a transformation back to the original RGB color space.
where $\varphi_{R,G,B}$ is the spectral power distribution function, and $\rho_{R,G,B}$ is a normalization factor. Thus, Γ is the projection of the spectral power distributions of the RGB primaries onto a set of basis functions $f(\lambda, R, G, B)_{WS,YB,RG}$. That is,

$$\Gamma = \begin{bmatrix} f(R)_{WS} & f(G)_{WS} & f(B)_{WS} \\ f(R)_{YB} & f(G)_{YB} & f(B)_{YB} \\ f(R)_{RG} & f(G)_{RG} & f(B)_{RG} \end{bmatrix}. \quad (6)$$
Figure 3. The cone spectral sensitivity functions over the visible wavelength range. (a) Cone response curves [36]. (b) Spectral response functions for the opponent channels [34].
This model is based on the stage theory of human color vision, and is derived from the data reported in an electro-physiological study [34]. Let $\Phi_{CVD}$ be the matrix that maps RGB to the opponent-color space of normal trichromacy; then the simulation of dichromatic vision is obtained by the transformation

$$\begin{bmatrix} R_s \\ G_s \\ B_s \end{bmatrix} = \Phi_{CVD} \begin{bmatrix} R \\ G \\ B \end{bmatrix}. \quad (7)$$
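As an illustration, a sketch of this simulation step is given below; the matrix values are the protanopia entries of Table 3, and the helper name simulate_cvd is ours.

```python
import numpy as np

# Protanopia matrix Phi_CVD at sensitivity 0.6 (see Table 3);
# deuteranopia and tritanopia use their own matrices the same way.
PHI_PROTANOPIA = np.array([[ 0.385,  0.769, -0.154],
                           [ 0.101,  0.830,  0.070],
                           [-0.007, -0.022,  1.030]])

def simulate_cvd(img, phi=PHI_PROTANOPIA):
    """Apply Equation (7) to every pixel of an H x W x 3 RGB image."""
    sim = np.einsum('ij,hwj->hwi', phi, img)
    return np.clip(sim, 0.0, 1.0)
```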
By definition, an eigenvector is a non-zero vector that a given linear transformation of a vector space maps onto a scalar multiple of itself. Thus, the algorithm computes the eigenvectors of the covariance matrix of the Y-B and R-G channels of the image in the λ, Y-B, R-G opponent color space, i.e.,

$$[v, d] = \mathrm{eig}\big(\mathrm{cov}(I_{Y\text{-}B}, I_{R\text{-}G})\big) \quad (8)$$

where eig is the eigenvalue and eigenvector function, and $I_{Y\text{-}B}$ and $I_{R\text{-}G}$ are the Y-B and R-G images, respectively. On the left hand side of the equation, d holds the generalized eigenvalues, and v is a 2 × 2 matrix, since the covariance cov is a 2 × 2 matrix derived from a pair of n × 1 images given by the covariance

$$\mathrm{cov}(X, Y) = \frac{\sum_{i=1}^{n}(X_i - \bar{X})(Y_i - \bar{Y})}{n - 1}. \quad (9)$$
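In code, the eigen-analysis of Equations (8) and (9) reduces to a few NumPy calls; the sketch below takes the principal (largest-eigenvalue) eigenvector and reports its orientation, with the function name being ours.

```python
import numpy as np

def eigenvector_angle(i_yb, i_rg):
    """Orientation (radians) of the dominant eigenvector of the 2 x 2
    covariance matrix of the Y-B and R-G channels, as in Eqs. (8)-(9)."""
    c = np.cov(i_yb.ravel(), i_rg.ravel())  # 2 x 2 covariance matrix
    d, v = np.linalg.eigh(c)                # eigenvalues in ascending order
    principal = v[:, np.argmax(d)]          # column of the largest eigenvalue
    return np.arctan2(principal[1], principal[0])
```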
For the original and CVD simulation images shown in Figure 4, the characteristics of the associated
eigenvectors are illustrated in Figure 5. The black line (at about 91◦ ) indicates the eigenvector of
the original image. For protanopia (red line about 150◦ ) and deuteranopia (green line about 140◦ ),
the eigenvectors lead the one associated with the original image. The eigenvector of tritanopia (blue
line at about 80◦ ) is behind the original image case. Our objective is to recover the angle difference
between the normal and color vision deficiency images, and use it to re-color the image. The difference between the image observed by normal viewers and the color vision deficiency simulation is defined by

$$I_{diff} = \sqrt{(I_n(YB) - I_c(YB))^2 + (I_n(RG) - I_c(RG))^2} \quad (10)$$

where $I_n$ and $I_c$ represent the intensities observed by a normal viewer and obtained from the color vision deficiency simulation, respectively.
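Equation (10) translates directly into a per-pixel distance computation; a minimal sketch, assuming the Y-B and R-G channels are stored as separate 2-D arrays:

```python
import numpy as np

def difference_image(in_yb, in_rg, ic_yb, ic_rg):
    """Equation (10): per-pixel distance in the (Y-B, R-G) plane between
    the normal view (in_*) and the CVD simulation (ic_*)."""
    return np.sqrt((in_yb - ic_yb) ** 2 + (in_rg - ic_rg) ** 2)
```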
Figure 4. The three types of color vision deficiency simulation using Machado's approach [35] with sensitivity 0.6 and the matrices Φ_CVD shown in Table 3.
Table 3. The CVD simulation matrices Φ_CVD of Machado's model [35] with sensitivity 0.6.

Protanopia
   0.385    0.769   −0.154
   0.101    0.830    0.070
  −0.007   −0.022    1.030

Deuteranopia
   0.499    0.675   −0.174
   0.205    0.755    0.040
  −0.011    0.031    0.980

Tritanopia
   1.105   −0.047   −0.058
  −0.032    0.972    0.061
   0.001    0.318    0.681
Figure 5. The eigenvectors of the original image (black) and of the protanopia (red), deuteranopia (green) and tritanopia (blue) simulations, plotted in the Y-B (horizontal axis) versus R-G (vertical axis) plane.
Figure 6. An example of protanopia simulation. (a) The original image and (b) the protanopia simulation result. (c) The difference of (a,b) computed in the λ, Y-B, R-G color space. (d) The binarized version of (c) for better illustration.
The angle θ associated with the eigenvector in the λ, Y-B, R-G color space is used to derive the range to be processed. Since the image is now in the opponent color space, the processing range extends from the angle of the simulation eigenvector to its opposite angle. Finally, the warping range extends from the angle orthogonal to the original eigenvector to the angle opposite the simulation eigenvector. An example is illustrated in Figure 8, where the green area is warped to the red area for image re-coloring.
Figure 8. An illustration of the color warping range from the green area to the red area.
The new color angle is derived from the original color angle by

$$\theta_{new} = \frac{\theta_{\perp} - \theta_{op}}{\pi}\,(\theta - \theta_{op}) \quad (12)$$

where the angles of color points are defined in the range $[-\pi, \pi]$, $\theta_{\perp}$ is the angle of the vector orthogonal to the original eigenvector, and $\theta_{op}$ is the angle of the vector opposite to the color vision deficiency simulation eigenvector.
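A direct transcription of Equation (12) is given below; the per-pixel angles would typically come from arctan2 of the R-G and Y-B components, and the function name is ours.

```python
import numpy as np

def warp_angle(theta, theta_perp, theta_op):
    """Equation (12): map a color angle theta (in [-pi, pi]) into the
    target range. theta_perp is the angle orthogonal to the original
    eigenvector; theta_op is opposite to the CVD simulation eigenvector."""
    return (theta_perp - theta_op) / np.pi * (theta - theta_op)

# Example of obtaining per-pixel color angles in the opponent plane:
# theta = np.arctan2(img_rg, img_yb)
```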
When the image is converted from RGB to the λ, Y-B, R-G color space, it occupies a limited range of the color space representation. A constraint is needed to avoid the luminance overflow problem, which would make the colors unsmooth after being converted back to the RGB representation. In our approach, a convex hull is adopted for the color constraint due to its simplicity for boundary derivation. Figure 9a–d illustrate the full-color images constructed using 256³ pixels, i.e., a resolution of 4096 × 4096, and the corresponding convex hull is shown in Figure 9e (the red lines). The formula used for the conversion is given by

$$\rho_{new} = \rho \times \frac{\rho(\theta_{new})}{\rho(\theta)} \quad (13)$$

where ρ is the original value, $\rho(\theta_{new})$ is the value of the convex hull at $\theta_{new}$, and $\rho(\theta)$ is the value of the convex hull at θ. The resulting image in the λ, Y-B, R-G color space is then transformed back to the RGB color space for display.
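A sketch of the gamut constraint in Equation (13) is shown below. It assumes the hull boundary has been precomputed as 360 radii sampled at 1-degree steps; this lookup helper and its sampling are our simplification, whereas the paper derives the boundary from the convex hull of the full-color image.

```python
import numpy as np

def make_hull_radius(boundary):
    """boundary: array of 360 hull radii, one per degree (a simplification)."""
    def hull_radius(theta):
        idx = np.mod(np.degrees(theta), 360).astype(int)
        return boundary[idx]
    return hull_radius

def constrain_radius(rho, theta, theta_new, hull_radius):
    """Equation (13): rescale the chroma radius so the warped color
    stays within the gamut boundary at its new angle."""
    return rho * hull_radius(theta_new) / hull_radius(theta)
```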
Figure 9. (a–d) The full-color image constructed with 256³ pixels, i.e., a resolution of 4096 × 4096, and its three types of color vision deficiency simulation. (e) The convex hulls of the images in (a–d). All types of CVD simulation cover only a part of the convex hull of the original full-color image.
3. Experiments
The proposed method has been tested on natural images including flowers, fruits, pedestrians
and landscape, as well as synthetic images such as patterns with pure colors (see Figure 10).
The experiments were carried out on both simulation view and human tests. Figure 11 shows the
images of protanopia color vision deficiency with different sensitive from 0.3 to 0.9 after our re-coloring
technique. For the color vision deficiency view simulation, we compared the results of the proposed
approach with the methods presented by Kuhn et al. [37], Rasche et al. [17] and Huang et al. [12].
Figure 12 shows the results of the deuteranopia color vision deficiency simulation and re-coloring
using different algorithms. While all methods are able to separate the flower from the leaves, our result
is more distinguishable and much closer to original color.
Figure 10. The test images used to evaluate the re-coloring techniques for color vision deficiency.
Figure 11. Enhancement of protanopia color vision deficiency with different sensitivity values: (a) sensitivity 0.3, (b) sensitivity 0.5, (c) sensitivity 0.7, (d) sensitivity 0.9.
Figure 12. The comparison of deuteranopia simulation of the flower image in Figure 10a. (a) Machado's
CVD simulation. (b) Our re-coloring technique after Machado’s CVD simulation. (c) Brettel’s CVD
simulation. (d) Kuhn’s re-coloring technique after Brettel’s CVD simulation. (e) Rasche’s CVD
simulation. (f) Rasche’s re-coloring after CVD simulation. (g) Huang’s CVD simulation. (h) Huang’s
re-coloring after CVD simulation.
The RMS value is computed over local neighborhoods as

$$\mathrm{RMS}_i = \sqrt{\frac{1}{N}\sum_{j}\Big[\big(a^r_{i+j} - a^t_{i+j}\big)^2 + \big(b^r_{i+j} - b^t_{i+j}\big)^2\Big]}$$

where $a^r_{i+j}$ and $b^r_{i+j}$ are the a∗ and b∗ components in L∗a∗b∗ of the reference image, $a^t_{i+j}$ and $b^t_{i+j}$ are those of the target image, and N is the number of elements in the k-neighborhood.
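Our reading of this metric, as a minimal sketch: for each pixel, the a∗b∗ differences over its N-element neighborhood are pooled into a root mean square value (the helper name is ours).

```python
import numpy as np

def local_rms(a_ref, b_ref, a_tgt, b_tgt):
    """RMS of the a*b* differences between a reference and a target
    patch (the N-element k-neighborhood of one pixel), following the
    RMS definition in the text."""
    d2 = (a_ref - a_tgt) ** 2 + (b_ref - b_tgt) ** 2
    return np.sqrt(d2.mean())
```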
An example of tritanopia CVD simulation and the re-coloring results is shown in Figure 13.
Compared to the results obtained from Kuhn’s and Huang’s methods, our approach provides better
contrast between the colors. Figure 14 shows the comparison of the RMS values on several test images using the proposed technique and Kuhn's method. Higher RMS values are displayed in dark blue, and the lowest values are shown in white. The figures indicate that, although the distributions of our results and Kuhn's are similar, our RMS values are higher, which implies better color separation. Additional results for various types of test images are shown in Figure 15.
The results of CVD simulation, re-coloring using the proposed technique and CVD simulation on the
re-colored images are shown in the first, second and third column, respectively.
Figure 13. The comparison of tritanopia simulation of the pencil image. (a) The original image. (b) The
CVD simulation using Machado’s method. (c) Machado’s CVD simulation on the image processed
by the proposed re-coloring technique. (d) The CVD simulation using Brettel’s method. (e) Brettel’s
CVD simulation on the image processed by Kuhn's re-coloring technique. (f,g) CVD simulation
and re-coloring using Huang’s approach.
Figure 14. The comparison of RMS values between our method and Kuhn’s method. (a) The RMS
value between Figures 6a and 12a. (b) The RMS value between Figure 12b and 12a. (c) The RMS value
between Figures 6a and 12c. (d) The RMS value between Figure 12d and 12c. (e) The RMS value
between Figure 13a and 13b. (f) The RMS value between Figure 13c and 13b. (g) The RMS value
between Figure 13a and 13d. (h) The RMS value between Figure 13e and 13d.
Figure 15. The results of CVD simulation, re-coloring using the proposed technique and CVD
simulation on the re-colored images for some test images in Figure 10 (the first two columns) and 15
(the third column).
• M1: The input image is converted to the L∗u∗v∗ color space, projected onto the u∗v∗ plane, and the u∗ and v∗ coordinates are equalized.
• M2: The CVD view is simulated from the input image, and the (R, G, B) difference between the input and simulation images is computed. A matrix is then used to enhance the color difference regions.
• M3: The input image is converted to the L∗u∗v∗ color space, and rotated to the non-confused color position.
• M4: The CVD view is simulated from the input image, and the distances among the colors are used to obtain the discrepancy. The image is then converted to the λ, Y-B, R-G color space, and the color difference regions are rotated.
We collected valid responses from 55 subjects in the test. The results are tabulated in Table 4. In the table, i indexes the method from the different research stages, j indexes the test image, and the letters A, B, C, D denote the subjects' rating levels from the best to the worst viewing experience. The summary rows report the cases in which the proportion of subjects choosing method Mi for test image Fj exceeds one third. As shown in Table 4, 83.64% (marked in blue) of the 55 subjects selected level A for method M2 on test image F1. The numbers marked in red indicate the method Mi with the highest proportion within each of the levels A, B, C and D; the associated methods are the most representative of those levels. Thus, the levels A, B, C and D are represented by the methods M2, M4, M3 and M1, respectively. It also shows that, from the best to the worst color vision deficiency viewing experience, the four methods rank as M2, M4, M3, M1.
Figure 16. The comparison of CVD simulation results processed using our re-coloring technique and
Kuhn’s method. The first and fourth rows are two test images and their CVD simulation results.
The second and fifth rows are the visualized RMS values, and HDR-VDP evaluation is shown in the
third and sixth rows.
Table 4. The human test on 55 valid subjects with four different methods. The numbers are shown as percentages. The numbers marked in red indicate the method Mi with the highest proportion in each of the levels A, B, C, D; the associated methods are the most representative of the level.

Level        A                                    B
             M1      M2      M3      M4           M1      M2      M3      M4
F1           5.45    83.64   5.45    5.45         45.45   9.09    9.09    36.36
F2           5.45    90.91   0.00    3.64         3.64    5.45    21.82   69.09
F3           1.82    72.73   14.55   10.90        20.00   3.64    25.45   50.91
F4           1.82    94.55   1.82    1.82         9.09    1.82    25.45   63.64
F5           1.89    91.67   0.00    8.33         0.00    5.45    20.00   74.55
F6           0.00    41.82   12.73   45.45        5.36    8.93    44.64   41.07
F7           49.09   9.09    3.64    38.18        16.36   27.27   30.91   25.45
F8           14.81   20.37   16.67   48.15        12.37   9.09    50.91   27.27
Summary      7.14    64.29   3.57    25.00        11.11   3.70    33.33   51.85

Level        C                                    D
             M1      M2      M3      M4           M1      M2      M3      M4
F1           9.09    5.45    54.55   30.91
F2           0.00    0.00    76.36   23.64        90.91   3.64    3.64    1.82
F3           9.09    14.55   43.64   32.73        69.09   9.09    16.36   5.45
F4           18.18   0.00    49.09   32.73        70.91   3.64    23.64   1.82
F5           14.55   1.82    63.64   20.00        88.89   0.00    7.41    3.70
F6           21.82   50.91   14.55   12.73        75.47   0.00    24.53   0.00
F7           16.36   30.91   32.73   20.00        18.18   32.73   32.73   16.36
F8           41.82   16.00   32.00   20.00        31.48   51.85   5.56    11.11
Summary      9.68    12.90   58.06   19.35        76.00   12.00   8.00    4.00
4. Conclusions
In this paper, we presented an image enhancement approach to provide colorblind people with a better viewing experience. An image re-coloring method based on eigenvector processing is proposed for robust color separation under the color deficiency transformation. It is shown that the eigenvector of color vision deficiency is distorted by an angle in the λ, Y-B, R-G color space. The proposed method performs well in terms of both subjective image quality and objective evaluation. Compared to the existing techniques, our results on natural images under CVD simulation perform very well in terms of RMS, HDR-VDP-2 and an IRB-approved human test. Both the objective comparison with previous works and the subjective evaluation on human tests validate the effectiveness of the proposed technique.
Author Contributions: H.-Y.L. proposed the idea, formulated the model, conducted the research and wrote the
paper. L.-Q.C. developed the software programs, performed experiments and data analysis, and wrote the paper.
M.-L.W. helped with the human test experiments.
Funding: This work was supported in part by the Ministry of Science and Technology of Taiwan under Grant
MOST 106-2221-E-194-004 and the Advanced Institute of Manufacturing with High-tech Innovations (AIM-HI)
from The Featured Areas Research Center Program within the framework of the Higher Education Sprout Project
by the Ministry of Education (MOE) in Taiwan.
Conflicts of Interest: The authors declare no conflict of interest.
References
1. Wong, B. Points of view: Color blindness. Nat. Methods 2011, 8, 441. [CrossRef] [PubMed]
2. Ishihara, S. Ishihara’s Tests for Color-Blindness, 38th ed.; Kanehara, Shuppan: Tokyo, Japan, 1990.
3. Hunt, R. Colour Standards and Calculations. In The Reproduction of Colour; John Wiley and Sons, Ltd.:
Hoboken, NJ, USA, 2005; pp. 92–125. [CrossRef]
4. Nathans, J.; Thomas, D.; Hogness, D.S. Molecular genetics of human color vision: The genes encoding blue,
green, and red pigments. Science 1986, 232, 193–202. [CrossRef]
5. Kalloniatis, M.; Luu, C. Psychophysics of Vision: The Perception of Color. Available online: https://www.ncbi.nlm.nih.gov/books/NBK11538/ (accessed on 30 April 2019).
6. Colblindor Web Site. Available online: https://www.color-blindness.com/category/tools/ (accessed on 30
April 2019).
7. Neitz, M.; Neitz, J. Numbers and ratios of visual pigment genes for normal red-green color vision. Science
1995, 267, 1013–1016. [CrossRef]
8. Graham, C.; Hsia, Y. Color defect and color theory: Studies of normal and color-blind persons, including a subject color-blind in one eye but not in the other. Science 1958, 127, 675–682. [CrossRef]
9. Fairchild, M. Color Appearance Models; The Wiley-IS&T Series in Imaging Science and Technology; Wiley:
London, UK, 2013.
10. Dolgin, E. Colour blindness corrected by gene therapy. Nature 2009, 2, 66–69. [CrossRef]
11. Hunt, R.W.G.; Pointer, M.R. Measuring Colour; John Wiley & Sons: Hoboken, NJ, USA, 2011.
12. Huang, J.B.; Wu, S.Y.; Chen, C.S. Enhancing Color Representation for the Color Vision Impaired.
In Proceedings of the Workshop on Computer Vision Applications for the Visually Impaired, Marseille,
France, 12–18 October 2008.
13. Huang, J.B.; Chen, C.S.; Jen, T.C.; Wang, S.J. Image recolorization for the colorblind. In Proceedings of the
IEEE International Conference on Acoustics, Speech and Signal Processing, Taipei, Taiwan, 19–24 April 2009;
pp. 1161–1164.
14. Huang, C.R.; Chiu, K.C.; Chen, C.S. Key Color Priority Based Image Recoloring for Dichromats. In Advances
in Multimedia Information Processing, Proceedings of the 11th Pacific Rim Conference on Multimedia, Shanghai,
China, 21–24 September 2010; Springer: Berlin/Heidelberg, Germany, 2010; pp. 637–647. [CrossRef]
15. Chen, Y.S.; Hsu, Y.C. Computer vision on a colour blindness plate. Image Vis. Comput. 1995, 13, 463–478.
[CrossRef]
16. Rasche, K.; Geist, R.; Westall, J. Re-coloring Images for Gamuts of Lower Dimension. Comput. Graph. Forum
2005, 24, 423–432. [CrossRef]
17. Rasche, K.; Geist, R.; Westall, J. Detail preserving reproduction of color images for monochromats and
dichromats. IEEE Comput. Graph. Appl. 2005, 25, 22–30. [CrossRef]
18. Lau, C.; Heidrich, W.; Mantiuk, R. Cluster-based color space optimizations. In Proceedings of the 2011 IEEE
International Conference on Computer Vision (ICCV), Barcelona, Spain, 6–13 November 2011; pp. 1172–1179.
19. Lee, J.; Santos, W. An adaptative fuzzy-based system to evaluate color blindness. In Proceedings of the 17th
International Conference on Systems, Signals and Image Processing (IWSSIP 2010), Rio de Janeiro, Brazil,
17–19 June 2010.
20. Poret, S.; Dony, R.; Gregori, S. Image processing for colour blindness correction. In Proceedings of the
2009 IEEE Toronto International Conference Science and Technology for Humanity (TIC-STH), Toronto, ON,
Canada, 26–27 September 2009; pp. 539–544.
21. CIE Web Site. Available online: http://cie.co.at/ (accessed on 30 April 2019).
22. Wright, W.D. Color Science, Concepts and Methods. Quantitative Data and Formulas. Phys. Bull. 1967,
18, 353. [CrossRef]
23. Mantiuk, R.; Kim, K.J.; Rempel, A.G.; Heidrich, W. HDR-VDP-2: A Calibrated Visual Metric for Visibility
and Quality Predictions in All Luminance Conditions. ACM Trans. Graph. 2011, 30, 40:1–40:14. [CrossRef]
24. Moroney, N.; Fairchild, M.D.; Hunt, R.W.; Li, C.; Luo, M.R.; Newman, T. The CIECAM02 Color Appearance
Model. Color Imaging Conf. 2002, 2002, 23–27.
25. Brettel, H.; Viénot, F.; Mollon, J.D. Computerized simulation of color appearance for dichromats. J. Opt. Soc.
Am. A 1997, 14, 2647–2655. [CrossRef]
26. Wild, F. Outline of a Computational Theory of Human Vision. In Proceedings of the KI 2005 Workshop
7 Mixed-Reality as a Challenge to Image Understanding and Artificial Intelligence, Koblenz, Germany,
11 September 2005; p. 55.
27. Busin, L.; Vandenbroucke, N.; Macaire, L. Color spaces and image segmentation. Adv. Imaging Electron Phys.
2008, 151, 65–168.
28. Vrhel, M.; Saber, E.; Trussell, H. Color image generation and display technologies. IEEE Signal Process. Mag.
2005, 22, 23–33. [CrossRef]
29. Sharma, G.; Trussell, H. Digital color imaging. IEEE Trans. Image Process. 1997, 6, 901–932. [CrossRef]
[PubMed]
30. Marguier, J.; Süsstrunk, S. Color matching functions for a perceptually uniform RGB space. In Proceedings
of the ISCC/CIE Expert Symposium, Ottawa, ON, Canada, 16–17 May 2006.
31. Huang, J.B.; Tseng, Y.C.; Wu, S.I.; Wang, S.J. Information preserving color transformation for protanopia and
deuteranopia. IEEE Signal Process. Lett. 2007, 14, 711–714. [CrossRef]
32. Ballard, D.H.; Brown, C.M. Computer Vision; Prentice Hall: Upper Saddle River, NJ, USA, 1982.
33. Swain, M.J.; Ballard, D.H. Color indexing. Int. J. Comput. Vis. 1991, 7, 11–32. [CrossRef]
34. Ingling, C.R.; Tsou, B.H.P. Orthogonal combination of the three visual channels. Vis. Res. 1977, 17, 1075–1082.
[CrossRef]
35. Machado, G.M.; Oliveira, M.M.; Fernandes, L.A. A physiologically-based model for simulation of color
vision deficiency. IEEE Trans. Vis. Comput. Graph. 2009, 15, 1291–1298. [CrossRef]
36. Smith, V.C.; Pokorny, J. Spectral sensitivity of the foveal cone photopigments between 400 and 500 nm.
Vis. Res. 1975, 15, 161–171. [CrossRef]
37. Kuhn, G.R.; Oliveira, M.M.; Fernandes, L.A. An efficient naturalness-preserving image-recoloring method
for dichromats. IEEE Trans. Vis. Comput. Graph. 2008, 14, 1747–1754. [CrossRef] [PubMed]
38. Wikipedia. Institutional Review Board. The Free Encyclopedia. Available online: http://en.wikipedia.org/wiki/Institutional_review_board (accessed on 1 July 2013).
© 2019 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).