Color Science



Our work on various problems in color science has been motivated by the development of Colorlab (a comprehensive color processing toolbox for Matlab). Specifically, the computational implementation of standard tristimulus colorimetry and of the advanced color appearance models described in Fairchild's excellent book [Color Appearance Models, Addison Wesley 98] gave us a deep understanding of color vision problems, which allowed us to propose alternative solutions in Color Constancy, Color Perception in Dichromats, Physiological Color Representation Spaces, and CRT calibration procedures.

Color Constancy [Martínez96, Martínez97]

The human visual system assigns constant chromatic attributes to objects in spite of alterations in the spectral distribution of the illuminant or in the spatio-chromatic arrangement of the visual field. This visual capacity is called color constancy. An example is shown below:

Artificial vision systems that operate using (linear) tristimulus colorimetry lack this ability, so they suffer significant chromatic errors when operating outdoors. As an example, figure (a) shows the change in the chromatic coordinates of a set of Munsell samples when the illuminant changes (from CIE C to CIE A).

Following retinex theory, in this work we implemented different versions of color constancy algorithms [Martínez96, Martínez97]. We proposed new non-linear color descriptors for a stimulus that are practically invariant to spectral changes of the illuminant. Not surprisingly, the proposed descriptors are based on normalization of the tristimulus values by the tristimulus values of the surround, as in other color appearance models [Fairchild98] or in contrast divisive normalization models [Simoncelli01]. Finally, we checked experimentally the statistical stability of these descriptors for standard color samples under a range of standard illuminants, as well as their range of validity. Figure (b) shows that the proposed descriptors are far more stable than chromatic coordinates under a variety of illuminants.
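To make the idea concrete, here is a minimal numerical sketch of a surround-normalized descriptor of this kind. The plain divisive form and all numbers are illustrative assumptions, not the exact descriptors of [Martínez96, Martínez97]:

```python
import numpy as np

def surround_normalized_descriptors(T_test, T_surr, eps=1e-6):
    """Illustrative non-linear descriptor: each tristimulus channel of the
    test is divisively normalized by the corresponding channel of the
    surround, so a common spectral change of the illuminant (approximately
    a per-channel gain) cancels out.  (Hypothetical sketch, not Colorlab.)"""
    return T_test / (T_surr + eps)

# A diagonal illuminant change (von Kries-like gains) scales test and
# surround alike, so the descriptors barely move:
gains = np.array([1.8, 1.0, 0.4])        # made-up gains for the illustration
T_test = np.array([30.0, 40.0, 20.0])    # tristimulus values of a sample
T_surr = np.array([60.0, 65.0, 50.0])    # tristimulus values of its surround

d1 = surround_normalized_descriptors(T_test, T_surr)
d2 = surround_normalized_descriptors(gains * T_test, gains * T_surr)
print(np.allclose(d1, d2, atol=1e-4))    # descriptors are (near) invariant
```

The raw tristimulus values, by contrast, change by the full gain factor, which is exactly the instability figure (a) illustrates.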

[Figure: (a) chromatic coordinates of the Munsell samples under CIE C vs. CIE A; (b) stability of the proposed descriptors under different illuminants]

Colorlab and the corresponding-pair paradigm (see below its application to predicting dichromatic perception) allow the interested reader to play with other (more standard) color appearance models to obtain color descriptions with color constancy.

Simulating Dichromatic Perception: the corresponding pair procedure. [Capilla04]

In the color appearance literature [Fairchild98], two scenes are said to be corresponding scenes if, despite their physical differences (e.g. different illumination), they give rise to the same color perception. Invertible color appearance models that incorporate surround information can easily be used to compute corresponding scenes by changing the values of the surround under the new illumination conditions.
Color appearance models, m, commonly operate by computing the perceptual description of the test, Utest = (Brightness, Hue, Saturation), from the tristimulus description of the test, Ttest, and the tristimulus description of the surround, Tsurr:

Utest = m( Ttest,Tsurr )

When illumination conditions change, the tristimulus description of test and surround change, but the perceptual description, U, does not. This can be used to compute color descriptions with color constancy or to compute corresponding scenes by using:

Ttest' = m⁻¹( m( Ttest , Tsurr ) , Tsurr' )
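As an illustration, here is the corresponding-scene computation with a toy invertible model, a simple von Kries-like normalization assumed only for this sketch (real appearance models, as implemented in Colorlab, are far richer):

```python
import numpy as np

# Toy invertible "appearance model": divisive normalization by the surround
# (an assumption for illustration only).
def m(T, T_surr):
    return T / T_surr            # perceptual description U of the test

def m_inv(U, T_surr):
    return U * T_surr            # tristimulus values that produce U

# Corresponding scene under a new illuminant: same perception U, new surround.
T_test  = np.array([30.0, 40.0, 20.0])
T_surr  = np.array([60.0, 65.0, 50.0])   # surround under illuminant 1
T_surr2 = np.array([90.0, 70.0, 25.0])   # surround under illuminant 2

T_test2 = m_inv(m(T_test, T_surr), T_surr2)

# Both scenes map to the same perceptual description:
print(np.allclose(m(T_test, T_surr), m(T_test2, T_surr2)))
```

The key requirement, visible even in this toy case, is that the model be invertible in its first argument for a fixed surround.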

In our work on the simulation of dichromatic perception [Capilla04] we adapted the corresponding-pair ideas above to the case of observers with different visual systems, a normal and a dichromat, experiencing the same color perception.
The dichromatic color appearance of a chromatic stimulus T can be described if a stimulus S is found such that a normal observer experiences the same sensation viewing S as a dichromat viewing T. If dichromatic and normal versions of the same color vision model are available (with parameters p' and p, respectively), S can be computed by applying the inverse of the normal model to the descriptors of T obtained with the dichromatic model. We give analytical form to this algorithm, which we call the corresponding-pair procedure:

S = m⁻¹( m( T , p' ) , p )
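A minimal linear sketch of the corresponding-pair procedure follows; the matrix values and the way the dichromatic channel is silenced are illustrative assumptions, not the actual models used in [Capilla04]:

```python
import numpy as np

# Toy linear "color vision model": an opponent transform of cone signals
# (the matrix values are illustrative assumptions).
P_normal = np.array([[ 1.0,  1.0,  0.0],   # achromatic:  L + M
                     [ 1.0, -2.0,  0.0],   # red-green:   L - 2M
                     [ 0.0,  0.0,  1.0]])  # blue-yellow: S cone

# Dichromatic version of the same model: the red-green channel is silenced
# (a crude protanope-like parameterization playing the role of p').
P_dichromat = P_normal.copy()
P_dichromat[1, :] = 0.0

def m(T, P):            # forward model: perceptual descriptors
    return P @ T

def m_inv(U, P):        # inverse of the (invertible, normal) model
    return np.linalg.solve(P, U)

# Corresponding-pair procedure: S = m⁻¹( m(T, p'), p )
T = np.array([0.6, 0.4, 0.3])              # cone signals of the stimulus
S = m_inv(m(T, P_dichromat), P_normal)

# S looks to the normal observer the way T looks to the dichromat: both
# yield the same descriptors under their respective models.
print(np.allclose(m(S, P_normal), m(T, P_dichromat)))
```

In this linear case the procedure reduces to the projection P⁻¹P'T, which makes the invertibility requirement on the normal model explicit.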

The analytical form highlights the requirements a color vision model must satisfy for this procedure to be applicable. To show the capabilities of the method, we apply the algorithm to different color vision models that satisfy these requirements. The algorithm avoids the need to introduce empirical information alien to the color model used, as was the case with previous methods. The relative simplicity of the procedure and its generality make the prediction of dichromatic color appearance an additional test of the validity of color vision models. In the example below, we show how different dichromats see Picasso's Dora Maar:

Color Representation Spaces 

In these review works [Capilla98, Capilla02], we analyze and compare several linear and non-linear color representation spaces at different physiological levels, show the relationships between spaces, and discuss the mathematical properties they should exhibit. At the photoreceptor level we examine the cone excitation and cone contrast spaces. We discuss the second-stage spaces, usually known as ATD spaces, paying particular attention to Boynton's space and including a non-linear space, the opponent modulation space. In addition, we approach third-stage transformations, which might take place at the cortical level. Finally, we analyze a perceptual ATD-type space, discussing how it might be derived as a third-stage transformation of an LGN-based ATD space.
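As a pointer to what these photoreceptor-level spaces look like computationally, here is a minimal sketch of the cone contrast space mentioned above; the excitation values are made-up numbers for illustration:

```python
import numpy as np

def cone_contrast(q, q0):
    """Cone contrast space: cone excitations q expressed relative to the
    excitations q0 produced by the adapting background, c = (q - q0) / q0,
    so c = 0 on the background itself."""
    return (q - q0) / q0

q0 = np.array([40.0, 25.0, 5.0])   # L, M, S excitations of the background
q  = np.array([48.0, 20.0, 5.0])   # excitations of a test stimulus
print(cone_contrast(q, q0))        # [ 0.2 -0.2  0. ]
```

Unlike the linear cone excitation space, this representation depends on the adapting background, which is what makes it a different physiological-level description.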

CRT Calibration

In this book chapter [Malo02] (in Spanish!) we analyze the general problem of color reproduction, restricting ourselves to its solution on CRT monitors. Appropriate references for the mathematical techniques required to solve the problem are given: fitting techniques, solution of non-linear equations, and multidimensional interpolation.
The experimental procedure for standard (Super VGA) CRT calibration is explained in detail. The results show that luminance and chromaticity can be reproduced with errors smaller than 5%. The procedure described here has been incorporated into the Colorlab toolbox.
The results below show the available colors in a generic CRT monitor.
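A sketch of the kind of characterization involved is given below; the gamma exponents and the primaries matrix are illustrative assumptions, not the measurements of [Malo02]:

```python
import numpy as np

# Illustrative CRT characterization: per-gun gamma plus a linear mixture of
# the primaries (all values are made-up, stand-ins for measured data).
gamma = np.array([2.2, 2.2, 2.2])          # per-gun gamma exponents
M = np.array([[41.2, 35.8, 18.0],          # columns: tristimulus values of
              [21.3, 71.5,  7.2],          # the R, G, B guns at full drive
              [ 1.9, 11.9, 95.0]])

def crt_to_XYZ(dac, dac_max=255.0):
    """Forward model: digital counts -> tristimulus values."""
    linear = (np.asarray(dac) / dac_max) ** gamma   # relative gun outputs
    return M @ linear

def XYZ_to_crt(XYZ, dac_max=255.0):
    """Inverse model: tristimulus values -> digital counts (if in gamut)."""
    linear = np.linalg.solve(M, XYZ)
    return dac_max * np.clip(linear, 0.0, 1.0) ** (1.0 / gamma)

dac = np.array([200.0, 100.0, 50.0])
XYZ = crt_to_XYZ(dac)
print(np.allclose(XYZ_to_crt(XYZ), dac))   # round trip recovers the counts
```

The inverse model is what calibration buys you: given a desired color, it returns the digital counts to feed the monitor, and the clipping step delimits the available gamut shown in the figure.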

F. Martinez, M.J. Luque, J. Malo, A. Felipe, J.M. Artigas
Implementations of a novel algorithm for colour constancy
Vision Research, Vol. 37, 13, pp. 1829-1844 (1997)
F. Martinez, V. Arnau, J. Malo, A. Felipe, J.M. Artigas
A new chromatic encoding for machine vision invariant to the change of illumination
Journal of Optics, Vol. 27, 4, pp. 171-181 (1996)
P. Capilla, M.A. Díez, M.J. Luque and J. Malo
Corresponding-pair procedure: a new approach to simulation of dichromatic color perception
J. Opt. Soc. Am. A, Vol. 21, 2, pp. 173-186 (2004)
P. Capilla, M.J. Luque and J. Malo
Espacios lineales de representación del color [Linear color representation spaces]
Fundamentos de Colorimetría (Artigas, Capilla and Pujol, Eds.). Ch. 2, pp. 31-54.
Servei de Publicacions de la Universitat de València (2002)
P. Capilla, J. Malo, M.J. Luque, J.M. Artigas
Colour representation spaces at different physiological levels: a comparative analysis
Journal of Optics, Vol. 29, pp. 324-338 (1998)
J. Malo and M.J. Luque
Reproducción del color en monitores [Color reproduction in monitors]
Tecnología del color (Artigas, Capilla and Pujol, Eds.). Ch. 5, pp. 165-180.
Servei de Publicacions de la Universitat de València (2002)