Journal of the Optical Society of America A | Optics, Image Science, and Vision
Editor: Franco Gori
Vol. 27, Iss. 2, Feb. 1, 2010, pp. 286–294
Study of the photodetector characteristics of a camera for color constancy in natural scenes

Sivalogeswaran Ratnasingam and Steve Collins


JOSA A, Vol. 27, Issue 2, pp. 286-294 (2010)
http://dx.doi.org/10.1364/JOSAA.27.000286


Abstract

An algorithm is described to extract two features that represent the chromaticity of a surface and that are independent of both the intensity and correlated color temperature of the daylight illuminating a scene. For mathematical convenience this algorithm is derived using the assumptions that each photodetector responds to a single wavelength and that the spectrum of the illumination source can be represented by a blackbody spectrum. Neither of these assumptions will be valid in a real application. A new method is proposed to determine the effect of violating these assumptions. The conclusion reached is that two features can be obtained that are effectively independent of the daylight illuminant if photodetectors with a spectral response whose full width at half maximum is 80 nm or less are used.

© 2010 Optical Society of America

1. INTRODUCTION

A well-known problem when imaging naturally illuminated scenes is that shadows and other effects can create a scene with a wide dynamic range, which can lead to saturation and/or underexposure in parts of the scene. In some applications an equally important problem is that the spectral composition of the illuminant varies. These variations can arise between regions in shadow and in direct illumination within the same scene. However, even larger variations occur in directly illuminated scenes at different times of day [1, 2] or on different days. These variations make it very difficult, if not impossible, to use otherwise useful color or chromaticity information to find and/or recognize potentially interesting objects within a scene.

Imaging sensors with a high input dynamic range are available from companies including Aptina and Melexis. Since these sensors can image scenes without saturation and/or underexposure, the more subtle problem of illuminant variation can be addressed. The ability to determine the color of a surface independent of the illuminant is known as color constancy. Many methods of achieving color constancy have been proposed, including algorithms that process data representing the logarithm of narrow-band photodetector responses [3, 4, 5]. One of these methods, which has the benefit that it is simple and yet applicable when the illuminant is daylight, is the "color constancy at a pixel" algorithm proposed by Finlayson and co-workers [6, 7]. This algorithm was developed on the assumption that each of four photodetectors responds to a single wavelength at a different position in the visible spectrum. Since this type of response severely limits the number of photons reaching each photodetector, it is far from ideal. However, this mathematically "ideal" spectral response was included in the development of the algorithm only for convenience. If the performance of the algorithm is not critically dependent on using photodetectors with a narrow spectral response, it may be possible to create cameras whose outputs are suitable for extracting reliable chromaticity information from a daylight-illuminated scene despite the diurnal variations in the spectrum of daylight.

In this paper the results of an initial investigation into the effect of the photodetectors' spectral response on an algorithm inspired by the work of Finlayson and others [6] are presented. Although this means that our primary concern is the width of the spectral response of the photodetectors, it is necessary to take into account the effect of noise introduced into the data. The paper starts in Section 2 with a mathematical model of the photon flux incident on a photodetector. This model is then simplified, and a method of combining the responses of four photodetectors to create two descriptors, or features, that are independent of the illuminator is described. In Section 3 four example surface reflectances are used to explain why reflectances that correspond to different colors will be projected to different locations in the two-dimensional feature space. This prediction is then confirmed in Section 4 by estimating the response of photodetectors using numerical simulation. A method is then proposed in Section 5 to assess the impact on the two features of using data from photodetectors that respond to a range of wavelengths. Finally, in Section 6 the effects of subtle changes in both the spectral characteristics of the photodetectors and the reflectance data are presented to ensure that the conclusions are independent of the details used to obtain the results.

2. ALGORITHM

An image is formed when light from an illuminator is reflected from different parts of a scene into an array of detectors. If the intensity of the illuminator is I and its output spectrum is E(λ), then the response R_{x,E} of a photodetector with a spectral sensitivity function F(λ) that is imaging part of a scene with a reflectance S_x(λ) at a point x on a surface is given by [6, 8]

$$R_{x,E} = (\underline{a}_x \cdot \underline{n}_x)\, I \int_{\omega} S_x(\lambda)\, E(\lambda)\, F(\lambda)\, d\lambda, \tag{1}$$

where an underscore denotes a vector quantity. The dot product $\underline{a}_x \cdot \underline{n}_x$ between the unit vector $\underline{a}_x$ representing the direction of the light source and the unit vector $\underline{n}_x$ representing the direction of the surface normal models a geometry factor that influences the amount of reflected light.

This expression for the response of the photodetector can be considerably simplified if it is assumed that its spectral sensitivity function is narrow enough to be represented by a Dirac delta function. The sifting property of the Dirac delta function can then be applied to simplify Eq. (1), so that for a photodetector that is effectively sensitive to light only at a wavelength λi,

$$R_{x,E} = (\underline{a}_x \cdot \underline{n}_x)\, I\, S_x(\lambda_i)\, E(\lambda_i). \tag{2}$$
The different components of Eq. (2) can be separated by taking the logarithm of both sides of Eq. (2) [3, 4, 6]. In particular, the logarithm of the response of a photodetector can be written in the form

$$\log(R_{x,E}) = \log(GI) + \log\{E(\lambda_i)\} + \log\{S_x(\lambda_i)\}, \tag{3}$$

where $G = \underline{a}_x \cdot \underline{n}_x$ is the geometry factor.

In scenes that are illuminated by daylight there are two potential problems. First, shadows can result in different parts of the scene having significantly different effective illumination intensities I. This can cause saturation or underexposure in parts of some scenes unless a camera with a high enough dynamic range is used. The more subtle effect is that the relative responses of the photodetectors that are sensitive to different wavelengths are influenced by both the reflectance being imaged and the spectrum of the daylight.

In naturally illuminated scenes, changes in the spectral content of daylight can influence the relative responses of different photodetectors and hence the apparent color of an area in a scene. Studies have shown that the spectrum of daylight is quite similar to that of a blackbody [6, 7]. In fact, they are sufficiently similar that daylight spectra are often described using the temperature of the blackbody with the most similar spectrum, a parameter known as the correlated color temperature (CCT) of the particular daylight spectrum. This similarity means that the power spectrum of a blackbody is a useful approximation when developing an algorithm to deal with the diurnal changes in the spectrum of daylight. The output spectrum of a blackbody with a temperature T, L(λ, T), can be calculated using Planck's equation,

$$L(\lambda, T) = \frac{2hc^2}{\lambda^5}\, \frac{1}{e^{hc/(k_B T \lambda)} - 1}, \tag{4}$$
where h is Planck's constant, kB is the Boltzmann constant, and c is the speed of light. Over the wavelength and CCT ranges of interest the blackbody spectrum can be approximated by the Wien approximation [6],

$$E(\lambda, T) = C_1 \lambda^{-5}\, e^{-C_2/(T\lambda)}, \tag{5}$$

where $C_1 = 2hc^2$ and $C_2 = hc/k_B$. Substituting Eq. (5) into Eq. (3) then gives

$$\log(R_i) = \log(GI) + \log(C_1 \lambda_i^{-5} S_i) - \frac{C_2}{T\lambda_i}, \tag{6}$$

where the equation is written in this form to emphasize a wavelength-independent component, a component that depends on the reflectance of the surface being imaged, and a component that depends on the CCT of the illuminant. To obtain an illuminant-independent descriptor of the surface reflectance both the first and third terms in Eq. (6) need to be removed.
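As a quick sanity check on the Wien approximation, the sketch below (using standard values of the physical constants) compares Eqs. (4) and (5) over the visible range for the extremes of the CCT range used later. The approximation underestimates Planck's law, with the largest error at long wavelengths and high CCT, but it is only used here to motivate the functional form of Eq. (6); the feature space is assessed later against measured CIE daylight spectra.

```python
import numpy as np

# Standard physical constants (SI units)
h = 6.626e-34      # Planck's constant, J s
c = 2.998e8        # speed of light, m/s
kB = 1.381e-23     # Boltzmann constant, J/K

C1 = 2.0 * h * c**2
C2 = h * c / kB

def planck(lam, T):
    """Blackbody spectral radiance, Eq. (4)."""
    return (2.0 * h * c**2 / lam**5) / (np.exp(h * c / (kB * T * lam)) - 1.0)

def wien(lam, T):
    """Wien approximation, Eq. (5)."""
    return C1 * lam**-5 * np.exp(-C2 / (T * lam))

lam = np.linspace(400e-9, 700e-9, 301)   # visible range, metres
for T in (5000.0, 9000.0):
    rel_err = np.max(np.abs(wien(lam, T) - planck(lam, T)) / planck(lam, T))
    print(f"T = {T:.0f} K: max relative error = {rel_err:.3%}")
```

The relative error of the Wien form is $e^{-C_2/(T\lambda)}$, a few percent at 5000 K and of order 10% at 700 nm and 9000 K; what matters for the algorithm is the exponential dependence on $1/(T\lambda)$, which the approximation preserves exactly.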

Following a procedure similar to that adopted by Marchant and Onyango [9], the two undesirable components in Eq. (6) can be removed by taking the difference between the log response of one detector and the weighted sum of the log responses of two other detectors to form the descriptor, or feature, F1:

$$F_1 = \log(R_2) - \{\alpha \log(R_1) + (1-\alpha)\log(R_3)\}. \tag{7}$$

Substituting Eq. (6) into Eq. (7) cancels the wavelength-independent components of the photodetector responses:

$$F_1 = \log(C_1 \lambda_2^{-5} S_2) - \{\alpha \log(C_1 \lambda_1^{-5} S_1) + (1-\alpha)\log(C_1 \lambda_3^{-5} S_3)\} - \frac{C_2}{T\lambda_2} + \left\{\frac{\alpha C_2}{T\lambda_1} + \frac{(1-\alpha) C_2}{T\lambda_3}\right\}. \tag{8}$$
This feature can then be made independent of the CCT of the illuminator if

$$\frac{C_2}{T\lambda_2} - \left\{\frac{\alpha C_2}{T\lambda_1} + \frac{(1-\alpha) C_2}{T\lambda_3}\right\} = 0, \tag{9}$$

which simplifies to

$$\frac{1}{\lambda_2} = \frac{\alpha}{\lambda_1} + \frac{1-\alpha}{\lambda_3}. \tag{10}$$

A feature that is independent of the illuminator can therefore be obtained by using this equation to select the value of α and the wavelengths at which the three photodetectors respond.

With only one feature, quite different reflectances can be confused [5]. To avoid this confusion a second illuminator-independent feature is required. To obtain this feature the output from a fourth photodetector can be combined with those of two of the existing photodetectors to create a second feature, F2:

$$F_2 = \log(R_3) - \{\gamma \log(R_2) + (1-\gamma)\log(R_4)\}. \tag{11}$$

As with F1, this second feature will be independent of the illuminator if

$$\frac{1}{\lambda_3} = \frac{\gamma}{\lambda_2} + \frac{1-\gamma}{\lambda_4}. \tag{12}$$

The two illuminator-independent features rely on the choice of six variables (four wavelengths and two mixture coefficients) subject to two constraints [Eqs. (10) and (12)]. This means that there are four variables whose values can be chosen independently. When choosing the values for these variables it is sensible to ensure that information from different parts of the visible spectrum is employed. There are different combinations of variable values that are consistent with this aim, including those in Table 1. This set of values has been chosen to cover the wavelength range from 400 nm to 700 nm. With four photodetectors spread uniformly across this range the difference between the characteristic wavelengths of neighboring photodetectors would be 75 nm. The photodetectors with the shortest and longest wavelength responses have therefore been placed 37.5 nm from each end of the relevant wavelength range, at 437.5 nm and 662.5 nm. The wavelengths of the other two photodetectors have then been calculated using Eqs. (10) and (12), so that α = 0.5 and γ = 0.5.
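The calculation of the two inner wavelengths can be sketched as follows; this is an illustrative reconstruction, not a reproduction of Table 1 (whose values are not given in this excerpt). With λ1 and λ4 fixed at 437.5 nm and 662.5 nm and α = γ = 0.5, Eqs. (10) and (12) become a pair of linear equations in the reciprocal wavelengths 1/λ2 and 1/λ3:

```python
import numpy as np

# End-point detector wavelengths from the text (nm) and equal mixture weights.
lam1, lam4 = 437.5, 662.5
alpha = gamma = 0.5

# Eqs. (10) and (12) are linear in u = 1/lam2, v = 1/lam3:
#   u - (1 - alpha) * v = alpha / lam1          [Eq. (10)]
#   -gamma * u + v      = (1 - gamma) / lam4    [Eq. (12)]
A = np.array([[1.0, -(1.0 - alpha)],
              [-gamma, 1.0]])
b = np.array([alpha / lam1, (1.0 - gamma) / lam4])
u, v = np.linalg.solve(A, b)
lam2, lam3 = 1.0 / u, 1.0 / v

print(f"lam2 = {lam2:.1f} nm, lam3 = {lam3:.1f} nm")
```

Both solved wavelengths fall between the two end points, so the four detectors sample the visible spectrum in order of wavelength while satisfying both invariance constraints exactly.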

3. TWO-DIMENSIONAL FEATURE SPACE

Applying Eqs. (7, 11) to the photodetector responses will lead to two features that are ideally independent of the illuminator. The perceived color of a surface within an image partly depends upon its relative lightness. Since the relative lightness of a surface is indistinguishable from a change in the local illuminator intensity this information is lost in Eqs. (7, 11). The two features that are obtained from these two equations therefore represent the chromaticity of the surface rather than its color.

To be robust to noise in the photodetector responses the two features should form a space in which very different chromaticities are widely separated. To understand why different chromaticities are widely separated in the two-dimensional feature space (F1, F2), consider the four representative reflectances in Fig. 1. With the parameters listed in Table 1, substituting Eq. (6) into Eqs. (7, 11) gives

$$F_1 = \log(S_2) - \tfrac{1}{2}\{\log(S_1) + \log(S_3)\} + K_1, \tag{13}$$

$$F_2 = \log(S_3) - \tfrac{1}{2}\{\log(S_2) + \log(S_4)\} + K_2, \tag{14}$$

where K1 and K2 are constants. This shows that for this choice of parameters the features are independent of the illuminator.

Equations (13, 14) show that the responses of four photodetectors can be used to extract two illuminant-independent chromaticity features. Given these four photodetector responses, the other combinations of the responses of three photodetectors are linear combinations of these two features. In principle more information about a surface can be obtained using photodetectors with more than four different spectral responses. In particular, the process of creating a feature space that is independent of both the intensity and CCT of the illuminant is expected to result in a space that has two dimensions fewer than the number of different types of photodetectors. In some applications these extra dimensions could be useful. For example, to estimate the reflectance of a colored surface in a higher-dimensional space, the algorithm can be extended to n color channels by taking three responses at a time to form an illuminant-independent feature as in Eq. (7); the estimated reflectance features will then lie in an (n-2)-dimensional space. However, four different types of photodetectors can be easily accommodated in the Bayer color filter pattern commonly used in color cameras, and the resulting two-dimensional feature space is sufficient to represent the chromaticity of a surface.
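The claimed invariance can be demonstrated numerically under the paper's idealized assumptions (delta-function detectors, Wien illuminant). The reflectance values below are invented for illustration; the end-point wavelengths come from the text and the inner two are solved from Eqs. (10) and (12). The constant C1 and the geometry/intensity term GI are omitted or varied freely because their coefficients in Eqs. (7) and (11) sum to zero:

```python
import numpy as np

C2 = 1.4388e7  # hc/kB in nm*K

# End-point wavelengths from the text; lam2, lam3 satisfy Eqs. (10), (12)
# with alpha = gamma = 0.5.
lam1, lam4 = 437.5, 662.5
A = np.array([[1.0, -0.5], [-0.5, 1.0]])
b = np.array([0.5 / lam1, 0.5 / lam4])
u, v = np.linalg.solve(A, b)
lam = np.array([lam1, 1.0 / u, 1.0 / v, lam4])

S = np.array([0.1, 0.3, 0.6, 0.5])  # illustrative reflectances at lam (made up)

def features(T, GI):
    """Delta-detector log responses under a Wien illuminant, Eqs. (6), (7), (11)."""
    logR = np.log(GI) + np.log(lam**-5 * S) - C2 / (T * lam)  # C1 omitted: it cancels
    F1 = logR[1] - 0.5 * (logR[0] + logR[2])
    F2 = logR[2] - 0.5 * (logR[1] + logR[3])
    return F1, F2

print(features(5000.0, 1.0))
print(features(9000.0, 7.3))   # different CCT and intensity, same two features
```

Changing either the CCT or the combined geometry/intensity factor leaves (F1, F2) unchanged to within floating-point rounding, which is exactly the cancellation Eqs. (8) to (10) describe.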

Assume that F1 is mapped onto the x axis and F2 onto the y axis of an illuminant-independent chromaticity space. Figure 1 shows that the reflectance of the gray surface is independent of wavelength. This means that all the photodetector responses are identical, and therefore this surface will be mapped to the point (K1, K2) in the feature space. The relative positions of the other surfaces with respect to the gray surface can then be predicted as follows. Consider the reflectance spectrum of the blue surface in Fig. 1. For this surface photodetector 1 will have the strongest response, photodetector 2 will have a strong response, but the responses of photodetectors 3 and 4 will be weaker. These relative values mean that for this reflectance F1 is larger than K1 but F2 is smaller than K2. In the feature space this blue surface will therefore appear to the right of and below the gray surface. The equivalent reasoning for the green and red surfaces in Fig. 1 leads to the conclusion that green will be to the right of and above gray, while red will be to the left of and above gray. These colors will therefore be well separated in the two-dimensional feature space. More importantly, their relative positions have been determined by the ratios of reflectances at different wavelengths. This means that surfaces with similar reflectances will be projected to neighboring parts of the feature space.

4. SIMULATED FEATURE SPACE

For simplicity the discussion so far has been based upon the assumption that the photodetectors respond only to a single wavelength. Technologically it is difficult to make a detector with this type of response. Equally important, photodetectors with a very narrow spectral response will be starved of photons and hence have a very poor sensitivity. To determine whether this algorithm is practical it is therefore critically important to assess the effect of using photodetectors that respond to a range of different wavelengths. This effect has been investigated using a Gaussian function to represent the spectral response of each photodetector. For a range of Gaussian model widths, the response of each photodetector has been obtained by numerically integrating Eq. (1) for a range of different illuminators and surface reflectances.

Although the theory leading to the features has been derived assuming a blackbody illuminator, the numerical integration has been performed using CIE standard daylight spectra [10]. These standard spectra are generated from three basis functions whose contributions to the final spectrum are determined by the CCT. Daylight spectra measured at different times and locations suggest that daylight spectra correspond to different ranges of CCT [11, 12, 13]. However, most of the measured daylight spectra fall in the CCT range from 5000 K to 9000 K, and the CIE standard daylight spectra represent measured data quite accurately below 9000 K [14]. For this investigation 14 different spectra with CCTs between 5000 K and 9000 K have therefore been used. The particular values used (5000 K, 5500 K, 5550 K, 5600 K, 5650 K, 5700 K, 5750 K, 6000 K, 6200 K, 6500 K, 6700 K, 7000 K, 8000 K, and 9000 K) were chosen to represent the non-uniform distribution of CCTs in measured daylight spectra [14].

The surface reflectances used for this study were the Munsell reflectance samples [15] widely used in color research [16, 17]. These data were sampled at 1 nm intervals, and the response of each detector to the different Munsell reflectances was obtained by integrating the product of the Munsell reflectance, the CIE standard daylight spectrum, and the Gaussian photodetector sensitivity model over the wavelength range from 400 nm to 700 nm.

A typical feature space obtained using the 14 CIE standard daylights and 202 Munsell reflectances with similar relative luminance is shown in Fig. 2. In this figure each cross, drawn in the actual color of the corresponding surface, represents a Munsell reflectance illuminated with one of the 14 daylight spectra. The blues, greens, and reds occur in the expected relative positions in the space. Also as expected, most similar colors are near neighbors in the feature space. A closer inspection of the feature space shows that the imperfect cancellation of the changes in the daylight spectra means that each of the Munsell reflectances creates a small cluster of responses in the feature space. The size of these clusters depends on the spectral width of the detectors being used. A method is therefore required to determine the significance of the area occupied by each cluster of responses corresponding to the same reflectance. This method can then be used to determine the widths of the spectral responses that can be used to obtain data from which useful features can be extracted.

5. ASSESSMENT OF THE FEATURE SPACE

Rather than assessing the feature space for a particular application, the approach that has been adopted is to compare the size of each cluster of responses with a measure of the perceptual similarity of the colors of reflectances that create neighboring clusters in the feature space. A color space that has been defined so that distances between colors within the space are proportional to their perceptual differences is the CIELab space [5]. In this space colors that are separated by a Euclidean distance of one unit are just noticeably different. However, just noticeable differences are difficult to detect, and the differences between colors that are separated by between 3.0 and 6.0 units have been described as good matches [18, 19]. This suggests that the size of each cluster in the feature space should be compared to the separation of reflectances that are separated in CIELab space by a few units. An important factor to take into account when comparing CIELab coordinates is that the L value of each reflectance spectrum represents its relative luminance when viewed by an observer whose eyes are adapted to a particular light level. The feature space has been designed to be independent of relative lightness and therefore independent of L. The sizes of the clusters of responses in the feature space have therefore been assessed using reflectances with very similar L values. In the CIELab space the value of L varies from 100 for the brightest colors to 0 for absolute black. Examination of the distribution of L values in the 1269-sample Munsell data set showed that the L value of each reflectance was close to one of a small number of values. An L value of 50 is used as the reference L value in the CIE standard 1994 color difference model (CIE94) [1], and there are 187 reflectances with L values between 47.8 and 50.2 in the Munsell data set. The Munsell samples with L values in this range were therefore used as the test data to assess the feature space. To obtain 100 pairs of test reflectances, the distances between different pairs of these 187 reflectances were compared. If the difference between the L values of a pair was smaller than 0.5 units, then the CIELab distance between the pair was calculated. After discarding any remaining pairs that were separated by more than six CIELab units, it was necessary to use pairs separated by between 4.6 and 6.0 units in CIELab space to obtain 100 pairs of perceptually similar reflectances.

In earlier work the size of the cluster of responses corresponding to a particular reflectance was characterized using the smallest circle enclosing all the relevant responses [20]. This is a simple method of determining the area covered by a set of responses. However, this method does not take into account the fact that, as Fig. 2 shows, the responses from a particular reflectance are not uniformly distributed around the average position of the relevant responses. Furthermore, in each part of the feature space the responses from each reflectance tend to have similar orientations.

To account for the observed distribution of responses it is more appropriate to use the Mahalanobis distance, rather than the Euclidean distance implied when using a circle, to determine a boundary that ideally encloses all points in a cluster. For a multivariate normal distribution, the Mahalanobis distance D_M between the center of the distribution C and a point P is defined by

$$D_M^2 = (P - C)^T\, \Sigma^{-1}\, (P - C), \tag{15}$$

where Σ is the covariance matrix of the distribution. The first step in determining a boundary for a particular reflectance was to find the center of each cluster of responses using the average position of all the responses in the cluster. The points on a boundary at a small Mahalanobis distance from the center were then calculated for both reflectances of a pair with very similar CIELab values. The Mahalanobis distance from each cluster center to its boundary was then gradually increased until the two boundaries touched. The typical result in Fig. 3 shows that these boundaries enclose most of the relevant responses. To assess the dependence of the feature space on the illuminator spectrum, the number of responses falling inside the correct boundary of the pair was counted. This test was performed on all 100 pairs of reflectances in the test data set, and the percentage of points falling within the correct boundary was recorded.
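The boundary test can be sketched as follows for a single cluster; the elongated synthetic cluster below is made up to stand in for one reflectance's responses, and the cluster center and covariance are estimated from the samples, as in the procedure described above:

```python
import numpy as np

def mahalanobis_sq(points, center, cov):
    """Squared Mahalanobis distance of each row of `points` from `center`, Eq. (15)."""
    diff = points - center
    inv = np.linalg.inv(cov)
    return np.einsum('ij,jk,ik->i', diff, inv, diff)

rng = np.random.default_rng(0)
# Synthetic elongated cluster standing in for one reflectance's responses.
cov_true = np.array([[0.010, 0.008],
                     [0.008, 0.009]])
cluster = rng.multivariate_normal(mean=[0.3, -0.1], cov=cov_true, size=100)

center = cluster.mean(axis=0)
cov = np.cov(cluster, rowvar=False)

# Fraction of responses inside a boundary at Mahalanobis distance 3.
d2 = mahalanobis_sq(cluster, center, cov)
inside = np.mean(d2 <= 3.0**2)
print(f"fraction inside D_M = 3 boundary: {inside:.2f}")
```

Because the covariance matrix captures both the spread and the orientation of the cluster, the constant-D_M boundary is an ellipse aligned with the cluster, unlike the circle used in the earlier work.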

The results in Fig. 4 show the effect of varying the FWHM of the photodetectors' spectral response from 20 nm to 200 nm. The first set of results obtained suggested that if the photodetector responses are represented to infinite precision, the width of the spectral response has very little effect on the usefulness of the extracted features. As a result the performance of the algorithm did not vary much when the FWHM of the photodetectors used to capture the image was increased. The large overlap between wide photodetector spectral responses means that for these photodetectors the extraction of the features relies on small differences between very similar responses. In a real system these small differences could be lost in the system noise. Thus, although the focus of this study is on the effects of changing the photodetectors' spectral response, it is important to model the impact of noise on the effective precision of the available data. The signal-to-noise ratio (SNR) of data available from any camera depends on multiple factors, including the charge storage capacity of each pixel, the noise introduced by the readout electronics, and the photon shot noise [21]. The SNRs expected from typical digital cameras have been estimated using the method and parameters described by Fowler [21]. The results in Fig. 5 show the expected SNR of a cell phone camera with a charge storage capacity of 5,000 electrons and of a CMOS camera with a charge storage capacity of 10^6 electrons. These results show that the reduction in pixel size, and hence charge storage capacity, needed to match the price targets of the cost-sensitive cell phone market degrades the available SNR. However, the better-quality CMOS imagers used in cameras give an SNR of more than 26 dB for all the photocurrents that can be detected when a 10-bit analog-to-digital converter is used to represent the response from each pixel.

To obtain a more realistic indication of the impact of varying the photodetector FWHM, Fig. 4 shows the results obtained when the SNR of the data from the photodetectors was increased from 26 dB. To assess the impact of noise, different levels of Gaussian noise were added to the linear photodetector responses, and at each noise level 100 examples of the nominally identical combination of responses were generated. That is, each Munsell reflectance illuminated with a single daylight spectrum forms 100 points in the chromaticity space, obtained by adding 100 samples of Gaussian noise to the original photodetector responses. The results in Fig. 4 show that as the SNR is increased the number of points that fall within the correct boundary increases. This trend arises because the noise reduces the effectiveness of the cancellation of the illuminator-dependent components of the detector responses. Since the feature extraction procedure was based on the assumption of narrow spectral responses, the surprising aspect of the results in Fig. 4 is that with an SNR of 40 dB good results are obtained for photodetectors with a FWHM of 80 nm or less. A FWHM of 80 nm is comparable to the FWHM of the photodetector responses in conventional color cameras such as the DXC930 [7]. The sensitivities of the photodetectors needed to generate the two features are therefore expected to be comparable to the sensitivity of photodetectors in existing cameras.
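The noise-injection step can be sketched as below. This is a simplified stand-in for the full sensor model of [21]: the noise standard deviation is set per channel directly from the signal amplitude and the target SNR, rather than from a pixel-level charge budget.

```python
import numpy as np

rng = np.random.default_rng(1)

def add_noise(responses, snr_db, n=100):
    """Return `n` noisy copies of the linear responses at the given SNR (dB).

    Simplification: per-channel noise sigma is signal / 10^(SNR/20),
    i.e. a fixed SNR for every channel rather than a full sensor model.
    """
    responses = np.asarray(responses, dtype=float)
    sigma = responses / (10.0 ** (snr_db / 20.0))
    return responses + sigma * rng.standard_normal((n, responses.size))

R = np.array([0.8, 0.5, 0.6, 0.3])    # example linear responses (made up)
noisy = add_noise(R, snr_db=40.0)

# At 40 dB the per-channel scatter is about 1% of the signal.
print(noisy.std(axis=0) / R)
```

Each noisy row is then pushed through the log-difference feature extraction, so that every reflectance/illuminant combination yields the 100-point cloud described above.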

6. ROBUSTNESS OF THE FEATURE SPACE

The results that have been obtained are dependent on the assumptions and the data that have been used. To ensure that any conclusions are independent of the assumptions about the details of the responses of the photodetectors and the reflectance data, other results have been obtained.

As pointed out in Section 2, the wavelengths of the peak spectral responses of the photodetectors in Table 1 are only one of a number of possible combinations. Another combination that has been investigated extensively is given in Table 2. This combination of parameters was obtained by spacing the photodetectors uniformly across the wavelength range of interest (400 nm to 700 nm) and then calculating the corresponding values of α and γ. Since these two values are close to 0.5, it appears that this choice of detectors is as good as the choice in Table 1.
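The uniform spacing described above can be illustrated with a one-line sketch, assuming (as suggested by the four detectors P1-P4 in Fig. 1) that four photodetectors are used; the values of α and γ quoted in the text are computed from these peak wavelengths using definitions given earlier in the paper and are not reproduced here.

```python
import numpy as np

# Peak wavelengths spaced uniformly across the 400-700 nm range of interest.
peaks = np.linspace(400, 700, 4)
print(peaks)  # [400. 500. 600. 700.]
```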

Figure 6 shows some of the results obtained with the features calculated using this second choice of detectors, compared with the equivalent results obtained with the parameters in Table 1. This comparison shows that for these two sensible choices of photodetectors the results obtained are very similar. Most importantly, the effects of changing the SNR and varying the FWHM of the detectors are comparable.

The explanation for the impact of varying the SNR on the photodetectors with the wider spectral responses shown in Fig. 4 was based on different amounts of correlation between the different photodetector responses. The degree of correlation between the responses of different photodetectors will be affected by aspects of the shape of the spectral response that are not captured by its FWHM. To study the possible effects of these other aspects, the model of the detector response has been varied: in addition to the Gaussian model, the photodetector spectral response has also been modeled using a parabola and a Lorentzian function. As shown in Fig. 7, these three sensitivity models were chosen because, for the same FWHM, the parabola is sensitive to a narrower range of wavelengths than the Gaussian, while the Lorentzian has a broader response than the Gaussian.
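A sketch of the three sensitivity models, each parameterized so that its full width at half maximum is the same, is given below. The normalizations are illustrative assumptions; the exact definitions used in the paper may differ.

```python
import numpy as np

def gaussian(wl, peak, fwhm):
    # Unit-peak Gaussian with the requested FWHM.
    return np.exp(-4 * np.log(2) * ((wl - peak) / fwhm) ** 2)

def lorentzian(wl, peak, fwhm):
    # Unit-peak Lorentzian; broader tails than the Gaussian.
    return 1.0 / (1.0 + (2 * (wl - peak) / fwhm) ** 2)

def parabola(wl, peak, fwhm):
    # Inverted parabola truncated at zero; sensitive over a narrower
    # range of wavelengths than the Gaussian.
    return np.maximum(0.0, 1.0 - 2 * ((wl - peak) / fwhm) ** 2)

# All three models fall to half their peak value one half-FWHM away.
for model in (gaussian, lorentzian, parabola):
    print(model(540.0, 500.0, 80.0))  # 0.5 in each case
```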

The results obtained as the FWHM of the three different models of the detector was varied are shown in Fig. 8. Although the details of the results for the different models differ, the general trend is still that increasing the FWHM of the detectors degrades the illumination independence of the two features. A comparison of the results from the three different models for the same FWHM values confirms that increasing the amount of overlap between detector responses degrades the quality of the features. Despite these differences, the results are consistent with the conclusion that useful illuminator-independent features can be obtained when the SNR is 40 dB or higher from photodetectors that have a spectral response with a FWHM of 80 nm or less.

To check the robustness of the conclusions to changes in the reflectance data, results have also been obtained using the reflectance spectra of different types of flowers from around the world [22]. Again these data were searched to find pairs of measured reflectance spectra that correspond to perceptually similar colors with a CIELab L value of approximately 50. In this case 100 pairs could be obtained using CIELab differences between 4.35 and 6.0 units. Figure 9 shows the results obtained with both sets of reflectance data. The performance of the algorithm with both sets of reflectance data is comparable, which suggests that the details of the results obtained are independent of the data used. However, the most important feature of the results in Fig. 9, and of similar results, is that they are consistent with the conclusion that useful illuminator-independent data can be obtained when the SNR is at least 40 dB and the photodetectors have spectral responses with FWHMs of 80 nm or less.

7. DISCUSSION AND CONCLUSION

There are two challenges when imaging scenes illuminated by daylight. The first is that shadows and other effects can create a scene with a wide dynamic range. The second, more subtle, problem is that diurnal changes in the spectrum of daylight can cause variations in the apparent color of surfaces within the scene. These diurnal variations make it very difficult, if not impossible, to use otherwise valid color or chromaticity information to find and/or recognize potentially interesting objects within a scene.

An approach for solving both of these problems based on photodetectors that respond to light at a single wavelength has been proposed by Finlayson and co-workers [6, 7]. The assumption that the photodetector responds to a single wavelength was introduced into the algorithm for mathematical convenience. With the resulting simplified mathematical model it is possible to understand how the algorithm creates illumination-independent features for any illuminator whose spectrum can be approximated by that of a blackbody. It is also possible to understand why different colors appear in different positions in the two-dimensional feature space.

The simple algorithm obtained by assuming a photodetector that responds to a single wavelength could be very useful. However, there will be very few photons at a particular wavelength, and so a photodetector with such a narrow spectral response will have a very low input signal. It is therefore important to determine how the results from the algorithm are affected when the width of the spectral response of the photodetectors is increased. In addition, the algorithm is based on the assumption that the spectrum of the illumination source is a blackbody spectrum. The effects of using CIE standard daylights as the illuminant and of varying the width of the spectral response of the photodetectors have therefore been studied. This study showed that the two features that are ideally independent of the illuminator retain a residual illuminator dependence. A method of assessing this residual dependence, by comparing the illuminator-induced variation in the extracted features to the difference between the features extracted from very similar colors, was proposed. Initial results obtained with photodetector responses represented to a very high precision suggested that the algorithm was effective even with very wide spectral responses. However, this is only possible if the algorithm is exploiting very small differences between the responses of the different photodetectors. To obtain a more realistic estimate of the effect of changing the width of the spectral responses, noise was added to the data used in the algorithm. The results that have been presented suggest that with an SNR of better than 40 dB the two proposed features can be obtained from the responses of photodetectors whose spectral responses have a FWHM of less than 80 nm.
In many situations when the SNR from a single pixel is worse than 40 dB, it may be possible to average the responses of neighboring pixels with almost identical extracted features to improve the overall effective SNR. Averaging as few as nine pixels should increase the SNR by approximately 10 dB, which the results in Fig. 4 suggest can lead to a significant improvement in the usefulness of the extracted features.
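The quoted figure follows from the usual noise-averaging argument: averaging N pixels with independent, identically distributed noise reduces the noise standard deviation by a factor of sqrt(N), improving the SNR by 10 log10(N) dB. A minimal check:

```python
import math

def snr_gain_db(n_pixels):
    # Averaging n_pixels independent measurements scales noise by
    # 1/sqrt(n), i.e. an SNR gain of 20*log10(sqrt(n)) = 10*log10(n) dB.
    return 10 * math.log10(n_pixels)

print(round(snr_gain_db(9), 2))  # 9.54, i.e. approximately 10 dB
```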

In conclusion, the residual illuminant dependency of a feature space formed by the proposed algorithm for solving color constancy in daylight-illuminated scenes has been investigated. Mathematical analysis of the feature space has been presented. A method was then proposed to assess the impact of different photodetector spectral responses on the illumination independence of the feature space. The significance of any residual illuminator dependence was tested with perceptually similar colors while varying the illuminant spectra. The results suggest that when the SNR is better than 40 dB for photodetectors with a FWHM of 80 nm or less, the illumination dependency of the feature space is small enough to identify colors that the human visual system would describe as good matches. These initial results are promising enough to justify further work on a range of issues, including the impact of the response characteristics of the different types of pixels that could be used to obtain the data required by the algorithm. Our future work will focus on extracting illuminant-independent reflectance images in a higher-dimensional space.

ACKNOWLEDGMENT

This research was supported by the Engineering and Physical Sciences Research Council (EPSRC), UK.

Table 1. Parameters of the First Set of Photodetectors


Table 2. Parameters of a Second Set of Photodetectors

Fig. 1 Reflectance spectra of four different Munsell colors, with the wavelengths of the four different photodetector responses indicated by P1, P2, P3, and P4.
Fig. 2 Two-dimensional feature space formed with 80 nm FWHM equal-weight photodetectors, using 202 Munsell surfaces and 14 CIE standard daylights. Each cross is drawn in the color of the corresponding Munsell sample.
Fig. 3 Pair of clusters of responses in the feature space and the boundary of equal Mahalanobis distance from both clusters when they touch each other.
Fig. 4 Test results when applying the Mahalanobis distance as the metric for finding the residual dependency of the features extracted from the test data set. The Munsell test samples are separated by between 4.6 and 6.0 CIELab units; 14 CIE daylight spectra were applied in this test.
Fig. 5 Expected SNR of two types of pixel models, using the parameters and method described by Fowler [21]. The dashed curve represents the expected SNR of a low-cost cell phone camera, while the solid curve represents the expected SNR of a CMOS camera. If the CMOS camera's output is represented using 10 bits, then the lowest photocurrents that can be detected are approximately three orders of magnitude less than the maximum detected photocurrent.
Fig. 6 Test results for the uniform-spread, equal-weight sensor set. Munsell samples were illuminated with 14 CIE standard daylight spectra with CCTs varying between 5000 K and 9000 K, and the Mahalanobis distance was applied as the test metric.
Fig. 7 Sensitivities of the Gaussian, Lorentzian, and parabola models at 80 nm FWHM.
Fig. 8 Test results of the algorithm when applying different sensor models. The sensor responses were generated with the sensor positions listed in Table 1 and FWHMs varying between 20 nm and 200 nm. The Munsell test set, separated by between 4.6 and 6.0 CIELab units, and 14 CIE standard daylight spectra (5000 K to 9000 K) were applied in this test.
Fig. 9 Test results of the algorithm when applying the Munsell and floral data sets to the equal-weight sensor responses. The Mahalanobis distance was applied as the metric, with 14 spectra of CIE standard daylights.

OCIS Codes
(330.0330) Vision, color, and visual optics : Vision, color, and visual optics
(330.1690) Vision, color, and visual optics : Color
(330.1720) Vision, color, and visual optics : Color vision
(330.1730) Vision, color, and visual optics : Colorimetry
(150.1135) Machine vision : Algorithms
(150.6044) Machine vision : Smart cameras

ToC Category:
Vision, Color, and Visual Optics

History
Original Manuscript: August 5, 2009
Revised Manuscript: December 16, 2009
Manuscript Accepted: December 21, 2009
Published: January 25, 2010

Virtual Issues
Vol. 5, Iss. 4 Virtual Journal for Biomedical Optics

Citation
Sivalogeswaran Ratnasingam and Steve Collins, "Study of the photodetector characteristics of a camera for color constancy in natural scenes," J. Opt. Soc. Am. A 27, 286-294 (2010)
http://www.opticsinfobase.org/josaa/abstract.cfm?URI=josaa-27-2-286



References

  1. H. C. Lee, Introduction to Color Imaging Science (Cambridge Univ. Press, 2005), pp. 46-47, 450-459.
  2. S. D. Buluswar and B. A. Draper, “Color machine vision for autonomous vehicles,” Eng. Applic. Artif. Intell. 11, 245-256 (1998). [CrossRef]
  3. E. H. Land and J. J. McCann, “Lightness and retinex theory,” J. Opt. Soc. Am. 61, 1-11 (1971). [CrossRef] [PubMed]
  4. B. K. P. Horn, “Determining lightness from an image,” Comput. Graph. Image Process. 3, 277-299 (1974). [CrossRef]
  5. M. Ebner, Color Constancy, Wiley Series in Imaging Science and Technology (Wiley, 2007).
  6. G. D. Finlayson and S. D. Hordley, “Color constancy at a pixel,” J. Opt. Soc. Am. A 18, 253-264 (2001). [CrossRef]
  7. G. D. Finlayson and M. S. Drew, “4-sensor camera calibration for image representation invariant to shading, shadows, lighting, and specularities,” in Proceedings of IEEE International Conference on Computer Vision (IEEE, 2001), pp. 473-480.
  8. G. D. Finlayson, B. Schiele, and J. L. Crowley, “Comprehensive colour image normalization,” in H. Burkhard and B. Neumann, eds., Computer Vision—ECCV’98 (Springer, 1998), pp. 475-490.
  9. J. A. Marchant and C. M. Onyango, “Shadow-invariant classification for scenes illuminated by daylight,” J. Opt. Soc. Am. A 17, 1952-1961 (2000). [CrossRef]
  10. Munsell Color Science Laboratory, “Daylight spectra,” http://mcsl.rit.edu/.
  11. V. D. P. Sastri and S. R. Das, “Spectral distribution and color of north sky at Delhi,” J. Opt. Soc. Am. 56, 829-830 (1966). [CrossRef]
  12. Y. Nayatani and G. Wyszecki, “Color of daylight from north sky,” J. Opt. Soc. Am. 53, 626-629 (1963). [CrossRef]
  13. T. Henderson and D. Hodgkiss, “The spectral energy distribution of daylight,” Br. J. Appl. Phys. 14, 125-133 (1963). [CrossRef]
  14. J. Hernández-Andrés, J. Romero, J. L. Nieves, and R. L. Lee, Jr., “Color and spectral analysis of daylight in southern Europe,” J. Opt. Soc. Am. A 18, 1325-1335 (2001). [CrossRef]
  15. Database, “Munsell Colours Matt,” ftp://ftp.cs.joensuu.fi/pub/color/spectra/mspec/.
  16. L. T. Maloney, “Illuminant estimation as cue combination,” J. Vision 2, 493-504 (2002). [CrossRef]
  17. G. D. Finlayson and M. S. Drew, “White-point preserving color correction,” in Proceedings of IS&T/SID 5th Color Imaging Conference (Society for Imaging Science and Technology, 1997), pp. 258-261.
  18. J. Y. Hardeberg, “Acquisition and reproduction of color images: colorimetric and multispectral approaches,” Ph.D. dissertation (Ecole Nationale Supérieure des Télécommunications, 1999).
  19. A. Abrardo, V. Cappellini, M. Cappellini, and A. Mecocci, “Art-works colour calibration using the VASARI scanner,” in Proceedings of IS&T and SID’s 4th Color Imaging Conference: Color Science, Systems and Applications (Society for Imaging Science and Technology, 1996), pp. 94-97.
  20. S. Ratnasingam, S. Collins, and J. Hernández-Andrés, “A method for designing and assessing sensors for chromaticity constancy in high dynamic range scenes,” in Proceedings of Color Imaging Conference CIC17 (EEUU, 2009), pp. 15-20.
  21. B. Fowler, “High dynamic range image sensor architectures,” High Dynamic Range Imaging Symposium and Workshop, Stanford University, California (2009), http://scien.stanford.edu/HDR/HDR_files/Conference%20Materials/Presentation%20Slides/Fowler_WDR_sensor_architectures_9_8_2009.pdf.
  22. S. E. J. Arnold, V. Savolainen, and L. Chittka, “The floral reflectance spectra database,” Nature Proceedings, http://dx.doi.org/10.1038/npre.2008.1846.1.
