The Reproduction Angular Error for Performance Evaluation

Recently, IEEE Transactions on Pattern Analysis and Machine Intelligence accepted a paper titled “The Reproduction Angular Error for Evaluating the Performance of Illuminant Estimation Algorithms” by Graham Finlayson, Roshanak Zakizadeh and Arjan Gijsenij. In this paper, we make the case for using a different performance metric than the traditional recovery angular error. The new metric, termed the reproduction angular error, is defined as the angle between the RGBs of a white surface when the actual and the estimated illuminant are ‘divided out’. Contrary to the traditional error metric, the new error ties algorithm performance to how illuminant estimates are actually used in practice. For future work, we strongly recommend using the reproduction angular error when comparing algorithms.
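For concreteness, here is a minimal sketch of the two metrics (our own illustration in Python, not reference code from the paper): e_gt and e_est denote the ground-truth and estimated illuminant RGBs, and the componentwise division plays the role of ‘dividing out’ an illuminant.

```python
import numpy as np

def recovery_angular_error(e_gt, e_est):
    """Traditional metric: angle (in degrees) between the estimated
    and the ground-truth illuminant vectors."""
    e_gt, e_est = np.asarray(e_gt, float), np.asarray(e_est, float)
    cos = np.dot(e_gt, e_est) / (np.linalg.norm(e_gt) * np.linalg.norm(e_est))
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

def reproduction_angular_error(e_gt, e_est):
    """Proposed metric: angle (in degrees) between a white surface
    corrected by the estimate and the ideal achromatic white."""
    r = np.asarray(e_gt, float) / np.asarray(e_est, float)  # 'divide out' the estimate
    u = np.ones(3)                                          # ideal white (1, 1, 1)
    cos = np.dot(r, u) / (np.linalg.norm(r) * np.linalg.norm(u))
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))
```

Both errors are zero for a perfect estimate, but for imperfect estimates they generally differ, so algorithm rankings computed with the two metrics need not agree.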

New hyperspectral data sets available

Two new hyperspectral data sets have been made available by Foster, Nascimento and Amano. The first consists of sequences of images of scenes undergoing natural illumination changes. The images were acquired at 1-hour intervals, and the set is described in “Time-lapse ratios of cone excitations in natural scenes”, Vision Research 120, 2016. The second set consists of images in which small spheres are used to capture estimates of local illumination spectra, described in “Spatial distributions of local illumination color in natural scenes”, Vision Research 120, 2016. More information can be found here.

Additional evaluation metric added: Reproduction error

In collaboration with Graham Finlayson and Roshanak Zakizadeh, the site has been augmented with a new evaluation metric: the reproduction error. This metric was proposed at BMVC 2014 as an alternative to the commonly used angular error between the estimated illuminant and the ground-truth illuminant (i.e., the recovery error). The reproduction error evaluates color constancy methods by calculating the angle between the RGBs of a white surface when the actual and the estimated illuminant are ‘divided out’ (see the sketch above). More information on this metric can be found here. This new metric is arguably better suited to predict the actual performance of a color constancy algorithm than the recovery error. Roshanak calculated the reproduction error for the illuminant estimates available on this site; her results have been added to the results section and can be found here.

New results added: CNN-based Color Constancy

We updated the page with new results! Simone Bianco et al. were so kind as to share the results of their 2015 CVPR Workshops paper titled “Color Constancy Using CNNs” (a preprint can be found here). These results are incorporated in our sections results-per-method and results-per-dataset. They also made available results using the recomputed ground truth of the reprocessed version of the Color Checker data set (see the downloadable results for more information).

New results added: Color Dog

We updated the page with new results! Nikola Banić and Sven Lončarić shared the results of the method described in their 2015 VISAPP paper “Color Dog: Guiding the Global Illumination Estimation to Better Accuracy”. The method uses the estimates of other methods as votes for a predefined set of illuminants obtained by a learning process; a sketch of the voting idea follows below. The results are incorporated in our sections results-per-method and results-per-dataset.
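As a rough, simplified sketch of that voting idea (our own illustration; the paper’s actual scheme is more elaborate, and the names centers and color_dog_vote are hypothetical), assume the candidate illuminants have already been learned from training data, e.g. by clustering ground-truth illuminants:

```python
import numpy as np

def angular_distance(a, b):
    """Angle (in radians) between two illuminant RGB vectors."""
    cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return np.arccos(np.clip(cos, -1.0, 1.0))

def color_dog_vote(estimates, centers):
    """Each base method's estimate votes for its nearest learned
    illuminant; the candidate with the most votes wins."""
    votes = np.zeros(len(centers))
    for e in estimates:
        nearest = min(range(len(centers)),
                      key=lambda i: angular_distance(e, centers[i]))
        votes[nearest] += 1
    return centers[int(np.argmax(votes))]
```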

Source-code + data set added: Illuminant Chromaticity from Image Sequences

In 2013, Véronique Prinet, Dani Lischinski and Michael Werman presented their work on “Illuminant chromaticity from image sequences” at the International Conference on Computer Vision. In this paper, they proposed a physically-based method to recover the illuminant from temporal sequences, without the need for user interaction or strong assumptions and heuristics. The method rests on the assumption that the incident light chromaticity is constant over a short space-time domain, and it can even be applied to scenes illuminated by two global light sources, assuming the incident light chromaticity can be modeled as a mixture of the two illuminant colors. They have now made available the source code of this method, as well as the data sets they experimented on. For more information on the project, please visit their project website.
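As a toy illustration of the physics such methods build on (emphatically not the authors’ algorithm; the function below and its threshold parameter are hypothetical): under the dichromatic reflection model, if only the specular magnitude at a pixel changes between two nearby frames, the temporal difference at that pixel is proportional to the illuminant RGB, so pooling difference vectors over the image gives a crude estimate of the illuminant chromaticity.

```python
import numpy as np

def illuminant_chromaticity_from_pair(frame_a, frame_b, threshold=0.05):
    """Crude illuminant chromaticity from per-pixel temporal differences.

    frame_a, frame_b: (H, W, 3) linear RGB images of the same scene.
    Differences weaker than `threshold` are discarded as uninformative.
    """
    diff = (frame_b.astype(float) - frame_a.astype(float)).reshape(-1, 3)
    informative = diff[np.linalg.norm(diff, axis=1) > threshold]
    # Flip vectors so they all point in a consistent direction, then average.
    informative *= np.sign(informative.sum(axis=1, keepdims=True))
    e = informative.mean(axis=0)
    return e / e.sum()  # chromaticity normalized so r + g + b = 1
```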

New data set available: Multi-Illuminant Multi-Object (MIMO)

Shida Beigpour, Christian Riess, Joost van de Weijer and Elli Angelopoulou recently published the paper “Multi-Illuminant Estimation with Conditional Random Fields” in IEEE Transactions on Image Processing (TIP), 23(1), 2014, pages 83-95, in which they introduce a novel two-illuminant color constancy benchmark data set with ground truth. The images are available here, and the original high-resolution raw data from the camera (.X3F format) are available upon request. More information about the data set can be found on the project website.

New data set added: YACCD2

We updated the page with the updated version of Yet Another Color Constancy Database (YACCD2)! This set of images was released by the Eidomatics Laboratory of the Department of Computer Science of the Università degli Studi di Milano (Italy), and contains both low dynamic range (LDR) and high dynamic range (HDR) images. More information on this data set can be found here.