Two new CNN-based color constancy works

Two new CNN-based color constancy works by Laakom et al. have recently appeared on arXiv. “Color Constancy Convolutional Autoencoder” studies the importance of pre-training for the generalization capability of color constancy models. “Bag of Color Features for Color Constancy” proposes a new approach called Bag of Color Features (BoCF), which builds on Bag-of-Features pooling combined with a self-attention mechanism.

Announcement: Illumination Estimation Challenge @ ISPA 2019

During the 11th International Symposium on Image and Signal Processing and Analysis (ISPA 2019), the University of Zagreb will host the Illumination Estimation Challenge. Researchers in the exciting area of illumination estimation are invited to participate. The event will take place in the wonderful and unique city of Dubrovnik, Croatia, also known as “The Pearl of the Adriatic”. For more information, please refer to the challenge website.

Color Constancy by GANs

Generative Adversarial Networks (GANs) have demonstrated remarkable results on many image-to-image translation problems. Building on this, Das et al. formulate the color constancy task as an image-to-image translation problem using GANs, dubbed CC-GANs (Color Constancy GANs). By conducting a large set of experiments on different datasets, they provide an experimental survey of the use of different types of GANs to solve color constancy.
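For readers unfamiliar with this formulation, the sketch below shows one common way to set up such an image-to-image translation objective (a pix2pix-style conditional GAN). This is a minimal illustration, not the authors' code: the tiny networks and the L1 loss weighting are assumptions made for brevity.

```python
# Minimal pix2pix-style sketch of color constancy as image-to-image
# translation: a generator G maps the input image to a white-balanced
# version, while a discriminator D judges (input, output) pairs.
# Network sizes and the loss weighting are illustrative assumptions.
import torch
import torch.nn as nn

G = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                  nn.Conv2d(16, 3, 3, padding=1), nn.Sigmoid())
D = nn.Sequential(nn.Conv2d(6, 16, 3, stride=2, padding=1), nn.LeakyReLU(0.2),
                  nn.Conv2d(16, 1, 3, padding=1))  # patch-level real/fake scores

bce = nn.BCEWithLogitsLoss()
l1 = nn.L1Loss()

def generator_loss(x, y_true, lambda_l1=100.0):
    """Adversarial loss (fool D) plus an L1 term pulling the output
    towards the ground-truth white-balanced image."""
    y_fake = G(x)
    pred = D(torch.cat([x, y_fake], dim=1))
    return bce(pred, torch.ones_like(pred)) + lambda_l1 * l1(y_fake, y_true)
```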

New data set available for evaluation

Recently, Nikola Banić and Sven Lončarić made available a new color constancy evaluation data set. This set contains 1365 exclusively outdoor images taken with a Canon EOS 550D camera in parts of Croatia, Slovenia, and Austria during various seasons; the ordering of the images with respect to their creation time has been shuffled. A SpyderCube calibration object is placed in the lower right corner of each image. The publication describing the set in more detail has not been published yet, but a preprint can be found on arXiv. The data set and a more detailed description of how to obtain it can be found here.

The Reproduction Angular Error for Performance Evaluation

Recently, IEEE Transactions on Pattern Analysis and Machine Intelligence accepted a paper titled “The Reproduction Angular Error for Evaluating the Performance of Illuminant Estimation Algorithms” by Graham Finlayson, Roshanak Zakizadeh, and Arjan Gijsenij. In this paper, we make a case for using a different performance metric than the traditional recovery angular error. The new metric, termed the reproduction angular error, is defined as the angle between the RGB of a white surface when the actual and the estimated illuminants are ‘divided out’. Contrary to the traditional error metric, the new error ties algorithm performance to how illuminant estimates are actually used in practice. For future work, we strongly recommend using the reproduction angular error when comparing algorithms.
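As a concrete illustration, the sketch below computes both metrics for illuminants given as RGB 3-vectors. This is a minimal sketch following the definitions above, not code from the paper.

```python
import numpy as np

def recovery_angular_error(e_est, e_true):
    """Traditional recovery error: angle between estimated and true illuminant."""
    cos = np.dot(e_est, e_true) / (np.linalg.norm(e_est) * np.linalg.norm(e_true))
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

def reproduction_angular_error(e_est, e_true):
    """Reproduction error: a white surface under e_true has RGB proportional
    to e_true; correcting by the estimate gives e_true / e_est (element-wise),
    which should align with pure white (1, 1, 1)."""
    r = np.asarray(e_true, dtype=float) / np.asarray(e_est, dtype=float)
    white = np.ones(3) / np.sqrt(3.0)
    cos = np.dot(r, white) / np.linalg.norm(r)
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))
```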

New hyperspectral data sets available

Two new hyperspectral data sets have been made available by Foster, Nascimento, and Amano. The first consists of sequences of images in which the scene undergoes natural illumination changes; the images were acquired at one-hour intervals, and the set is described in “Time-lapse ratios of cone excitations in natural scenes”, Vision Research 120, 2016. The second set consists of images in which small spheres are used to capture estimates of local illumination spectra, described in “Spatial distributions of local illumination color in natural scenes”, also in Vision Research 120. More information can be found here.

Additional evaluation metric added: Reproduction error

In collaboration with Graham Finlayson and Roshanak Zakizadeh, the site has been augmented with a new evaluation metric: the reproduction error. This metric was proposed at BMVC 2014 as an alternative to the often-used angular error between the estimated illuminant and the ground-truth illuminant (i.e., the recovery error). The reproduction error evaluates color constancy methods by calculating the angle between the image RGB of a white surface when the actual and the estimated illuminants are ‘divided out’. More information on this metric can be found here. This new metric is arguably better suited to predict the actual performance of a color constancy algorithm than the currently often-used recovery error. Roshanak's effort to calculate the reproduction error for the illuminant estimates found on this site has been added to the results section, and can be found here.

New results added: CNN-based Color Constancy

We updated the page with new results! Simone Bianco et al. were so kind as to share the results of their 2015 CVPR Workshop paper titled “Color Constancy Using CNNs” (a preprint can be found here). These results have been incorporated into our results-per-method and results-per-dataset sections. They also made available results using the recomputed ground truth of the reprocessed version of the Color Checker data set (see the downloadable results for more information).

New results added: Color Dog

We updated the page with new results! Nikola Banić and Sven Lončarić shared the results of their method described in their 2015 paper presented at VISAPP, “Color Dog: Guiding the Global Illumination Estimation to Better Accuracy”. The method uses other methods as voters for a predefined set of illuminants obtained by a learning process, and has been incorporated into our results-per-method and results-per-dataset sections.
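To make the voting idea concrete, here is a minimal sketch of one plausible reading of such a scheme; the example centers and the unweighted majority vote are assumptions for illustration, not the exact procedure of the paper.

```python
import numpy as np

def angle(a, b):
    """Angular distance between two illuminant vectors."""
    cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return np.arccos(np.clip(cos, -1.0, 1.0))

# Hypothetical illuminant centers; in practice these would be learned from
# the ground-truth illuminants of a training set (e.g. by clustering).
centers = np.array([[0.9, 1.0, 0.7],   # warmer illuminant
                    [0.7, 1.0, 1.0]])  # cooler illuminant

def vote_for_illuminant(voter_estimates, centers):
    """Each base method's estimate votes for its angularly nearest center;
    the center with the most votes is returned as the final estimate."""
    votes = np.zeros(len(centers))
    for e in voter_estimates:
        votes[np.argmin([angle(e, c) for c in centers])] += 1
    return centers[np.argmax(votes)]
```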

Source code + data set added: Illuminant Chromaticity from Image Sequences

In 2013, Véronique Prinet, Dani Lischinski, and Michael Werman presented their work “Illuminant chromaticity from image sequences” at the International Conference on Computer Vision. In this paper, they proposed a physically-based method to recover the illuminant from temporal sequences, without requiring user interaction, strong assumptions, or heuristics. The method is based on the assumption that the incident light chromaticity is constant over a short space-time domain, and it can even be applied to scenes illuminated by two global light sources, assuming the incident light chromaticity can be modeled as a mixture of the two illuminant colors. They have now made available the source code of this method, as well as the data sets they experimented on. For more information, please visit the project website.