## Illumination invariant recognition and 3D reconstruction of faces using desktop optics

Optics Express, Vol. 19, Issue 8, pp. 7491-7506 (2011)

http://dx.doi.org/10.1364/OE.19.007491


### Abstract

We propose illumination invariant face recognition and 3D face reconstruction using desktop optics. The computer screen is used as a programmable extended light source to illuminate the face from different directions and acquire images. Features are extracted from these images and projected to multiple linear subspaces in an effort to preserve unique features rather than the most varying ones. Experiments were performed on our database of 4347 images (106 subjects) and on the extended Yale B and CMU-PIE databases, and better results were achieved than with the existing state of the art. We also propose an efficient algorithm for reconstructing 3D face models from three images taken under arbitrary illumination. The subspace coefficients of training faces are used as input patterns to train multiple Support Vector Machines (SVMs), where the output labels are the subspace parameters of ground truth 3D face models. Support Vector Regression is used to learn multiple functions that map the input coefficients to the parameters of the 3D face. During testing, three images of an unknown/novel face under arbitrary illumination are used to estimate its 3D model. Quantitative results are presented for our database of 106 subjects and qualitative results for the Yale B database.

© 2011 OSA

## 1. Introduction


## 2. Literature review

### 2.1. Illumination invariant face recognition

… *virtual* lighting conditions such that the images under these illuminations are sufficient to approximate the illumination cone [15].

… *physical* lighting directions for illumination invariant face recognition. However, some of the suggested light source directions [15, 16] …

### 2.2. 3D face modeling using images/the computer screen as illuminant


## 3. Data acquisition

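The computer screen acts as a programmable extended light source: a bright patch displayed at a given screen position illuminates the face from a particular direction while the camera acquires one image per patch. As a rough, hypothetical sketch of this geometry (the grid size, screen dimensions, face distance, and the function name below are illustrative assumptions, not values from the paper), each patch can be approximated as a point source whose direction is the unit vector from the face to the patch center:

```python
import numpy as np

def patch_light_directions(screen_w_m, screen_h_m, face_dist_m, grid=(4, 6)):
    """Approximate light-direction unit vectors for a grid of bright patches
    shown on a screen of physical size screen_w_m x screen_h_m (meters),
    with the face centered at face_dist_m in front of the screen center.
    Returns an array of shape (rows*cols, 3)."""
    rows, cols = grid
    dirs = []
    for r in range(rows):
        for c in range(cols):
            # patch center in screen coordinates, origin at the screen center
            x = ((c + 0.5) / cols - 0.5) * screen_w_m
            y = ((r + 0.5) / rows - 0.5) * screen_h_m
            v = np.array([x, y, face_dist_m])  # vector from face to patch
            dirs.append(v / np.linalg.norm(v))
    return np.array(dirs)

L = patch_light_directions(0.5, 0.3, 0.6, grid=(4, 6))
print(L.shape)  # (24, 3)
```

The paper acquires 23 differently illuminated images per subject; a regular grid like the hypothetical 4 × 6 layout above is only one possible arrangement of patches.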

## 4. Subspace feature representation and classification


Contourlet coefficients are computed for the $i$th image (where $i = 1,\dots,23$) of each subject at scale $s$ and orientation $k$. The Contourlet transform has 33% inherent redundancy [9]. Let $\mathbf{A}^{sk}$ (with $i \in \{1,2,\dots,23\}$ and $j = 1,2,\dots,G$) represent the matrix of Contourlet coefficients of the $N$ training images (under different illuminations) of the $G$ subjects at the same scale $s$ and orientation $k$. Note that only a subset of the 23 images under different illuminations is used for training. Each column of $\mathbf{A}^{sk}$ contains the Contourlet coefficients of one image. The mean of the matrix is given by

$$\boldsymbol{\mu}^{sk} = \frac{1}{NG}\sum_{m=1}^{NG} \mathbf{A}^{sk}_{\cdot m},$$

and the covariance matrix by

$$\mathbf{C}^{sk} = \left(\mathbf{A}^{sk} - \boldsymbol{\mu}^{sk}\mathbf{p}\right)\left(\mathbf{A}^{sk} - \boldsymbol{\mu}^{sk}\mathbf{p}\right)^{T}.$$

The eigenvectors of $\mathbf{C}^{sk}$ are calculated by Singular Value Decomposition,

$$\mathbf{C}^{sk} = \mathbf{U}^{sk}\mathbf{S}^{sk}\left(\mathbf{U}^{sk}\right)^{T},$$

where the matrix $\mathbf{U}^{sk}$ contains the eigenvectors sorted in decreasing order of eigenvalues and the diagonal matrix $\mathbf{S}^{sk}$ contains the respective eigenvalues. Let $\lambda^{sk}_{n}$ (where $n = 1,2,\dots,N \times G$) represent the eigenvalues in decreasing order. We select the subspace dimension (i.e. the number of eigenvectors) so as to retain 90% of the energy and project the Contourlet coefficients to this subspace. If the first $L$ eigenvectors of $\mathbf{U}^{sk}$ are retained in $\mathbf{U}^{sk}_{L}$, then the subspace Contourlet coefficients at scale $s$ and orientation $k$ are given by

$$\hat{\mathbf{A}}^{sk} = \left(\mathbf{U}^{sk}_{L}\right)^{T}\left(\mathbf{A}^{sk} - \boldsymbol{\mu}^{sk}\mathbf{p}\right),$$

where $\mathbf{p}$ is a row vector of all 1's (so that $\boldsymbol{\mu}^{sk}\mathbf{p}$ replicates the mean across all columns). Note that $L$ generally differs with scale $s$ and orientation $k$. Similar subspaces are calculated for different scales and orientations using the training data and, each time, the subspace dimension is chosen so as to retain 90% of the energy. In our experiments, we considered three scales and a total of 15 orientations along with the low-pass sub-band image. Figure 3 shows samples of a sub-band image and Contourlet coefficients at two scales and seven orientations.

The subspace coefficients are then normalized so that the variance along each of the $L$ dimensions becomes equal. This is done by dividing the subspace coefficients by the square root of the respective eigenvalues. The normalized subspace Contourlet coefficients at three scales and 15 orientations of each image are stacked to form a matrix of feature vectors $\mathbf{B}$, where each column is a feature vector of the concatenated subspace Contourlet coefficients of one image. The concatenated vectors may still contain some redundancy; therefore, these features are once again projected to a linear subspace. This time, however, the mean need not be subtracted because the features are already centered at the origin. Since the feature dimension is usually large compared to the size of the training data, $\mathbf{B}\mathbf{B}^{T}$ is very large. Moreover, at most $N \times G - 1$ orthogonal dimensions (eigenvectors and eigenvalues) can be calculated for training data of size $N \times G$; the $(N \times G)$th eigenvalue is always zero. Therefore, we calculate the covariance matrix $\mathbf{C} = \mathbf{B}^{T}\mathbf{B}$ instead and find the $N \times G - 1$ dimensional subspace as follows:

$$\mathbf{C} = \mathbf{B}^{T}\mathbf{B} = \mathbf{U}'\mathbf{S}'\left(\mathbf{U}'\right)^{T}.$$

Each column of $\mathbf{B}\mathbf{U}'$ is divided by the square root of the corresponding eigenvalue so that the eigenvectors in $\mathbf{U}$ (i.e. its columns) are of unit magnitude; the last column of $\mathbf{B}\mathbf{U}'$ is ignored to avoid division by zero. Thus $\mathbf{U}$ defines an $N \times G - 1$ dimensional linear subspace. The feature vectors are projected to this subspace and used for classification, where $\mathbf{t}$ and $\mathbf{q}$ denote the subspace Contourlet coefficients of a target and a query face and $n$ is the subspace dimension.
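When the stacked feature vectors are far higher-dimensional than the number of training samples, the eigenvectors of $\mathbf{B}\mathbf{B}^{T}$ can be recovered from the much smaller matrix $\mathbf{B}^{T}\mathbf{B}$, as described above. A minimal NumPy sketch on synthetic, centered data (the dimensions are illustrative assumptions) verifies that the recovered vectors are orthonormal eigenvectors:

```python
import numpy as np

rng = np.random.default_rng(1)
d, m = 500, 20                  # feature dimension >> number of training vectors
B = rng.normal(size=(d, m))
B = B - B.mean(axis=1, keepdims=True)   # center: makes the last eigenvalue zero

# Solve the small m x m eigenproblem C = B^T B instead of the d x d one
w, Up = np.linalg.eigh(B.T @ B)
w, Up = w[::-1], Up[:, ::-1]    # decreasing eigenvalue order

# Recover unit-norm eigenvectors of B B^T, dropping the last (zero) eigenvalue
U = (B @ Up[:, :m - 1]) / np.sqrt(w[:m - 1])

print(np.allclose(U.T @ U, np.eye(m - 1)))  # True: columns are orthonormal
```

Dividing each column of $\mathbf{B}\mathbf{U}'$ by the square root of its eigenvalue yields unit-norm eigenvectors of the large scatter matrix, exactly as the text prescribes; the last column must be dropped because its eigenvalue is zero.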

## 5. 3D face reconstruction

Let $I$ be a matrix whose three columns are images of the same face under different illuminations. We calculate $C = I^{T}I$ instead of $II^{T}$, which would be computationally expensive given the high dimensionality of the images. Note that the mean is not subtracted from the images as in the PCA or eigenfaces method [2]. The eigenvectors of $II^{T}$ are recovered from those of $C$ as

$$U_{i} = \frac{1}{\sqrt{\lambda_{i}}}\, I\, U'_{i},$$

where $U_{i}$ is the $i$th eigenvector of $II^{T}$, $\lambda_{i}$ is its $i$th eigenvalue and $U'_{i}$ is the $i$th column of $U'$, the eigenvector matrix of $C$. We use only the first eigenvector (i.e. $i = 1$) for calculating features in order to save computation time. The remaining two dimensions have a negligible effect on the accuracy because the data is normalized in the $xy$ dimensions.

The first eigenvectors of all training faces are projected to a PCA subspace. Let $E$ represent the matrix of the first eigenvectors ($U_{1}$) of the $n$ training faces. Thus $E$ has $n$ columns and as many rows as there are pixels in a training image. Since $n$ is very small compared to the image size, we calculate the covariance matrix as follows:

$$C_{E} = \left(E - \mu p\right)^{T}\left(E - \mu p\right), \qquad \mu = \frac{1}{n}\sum_{j=1}^{n} E_{j},$$

where $E_{j}$ corresponds to the $j$th column of $E$ and $p$ is a row vector of all 1's. The eigenvectors of the large scatter matrix are then recovered as

$$U_{j} = \frac{1}{\sqrt{\lambda_{j}}}\left(E - \mu p\right)U'_{j},$$

where $U_{j}$ is the $j$th eigenvector and $\lambda_{j}$ is the $j$th eigenvalue. At most $n - 1$ eigenvectors/eigenvalues can be estimated from $n$ sets of training images. Next, $E$ is projected to the above calculated PCA subspace:

$$F = U^{T}\left(E - \mu p\right),$$

where $F$ is a matrix of $(n - 1)$-dimensional column vectors. We take the first 45 dimensions, corresponding to the highest eigenvalues, which retain 92.5% of the energy, and normalize them so that the variation along each dimension is equal, i.e. by dividing each dimension by the square root of the corresponding eigenvalue $\lambda_{j}$. Finally, each vector (i.e. column) is normalized to unit length to form the input patterns $x$ for training Support Vector Regression.

The ground truth 3D face models (surface-fitted using Gridfit [29]) are arranged in a matrix $\mathbf{A}$ of column vectors, where each column represents a face. Since the number of faces is much smaller than their dimensionality, we calculate the 3D subspace using Eqs. (11) to (13). We take the 13 dimensions corresponding to the largest eigenvalues (preserving 90% of the energy) and project the training 3D faces to this subspace using Eq. (14). Thus the 3D faces can be reconstructed up to 90% fidelity. Finally, the training faces are normalized so that the variation along each dimension lies between −1 and +1.

Support Vector Regression (SVR) with an RBF kernel is used to learn a non-linear function that estimates one of the 13 coefficients of the 3D face from the input patterns $x$. During testing, novel images of database faces and images of unseen faces are fed to the trained SVRs to estimate the 13 coefficients, which are then projected back to the 3D face space to obtain the vector of depth values and sampling rates. This vector is used to reconstruct the 3D face at the correct scale, without GBR (generalized bas-relief) ambiguity.
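One regressor per 3D-subspace coefficient maps the normalized input pattern to that coefficient. As a dependency-free sketch, the snippet below substitutes RBF kernel ridge regression for the paper's Support Vector Regression (same RBF kernel, but a squared loss instead of the ε-insensitive loss); all dimensions, data, and names are illustrative assumptions:

```python
import numpy as np

def rbf_kernel(X, Z, gamma=1.0):
    """Gaussian RBF kernel matrix between the rows of X and Z."""
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def fit_coeff_regressors(X, Y, gamma=1.0, lam=1e-3):
    """X: (n, p) input patterns; Y: (n, 13) 3D-subspace coefficients.
    Returns dual weights (n, 13), one column per coefficient regressor."""
    K = rbf_kernel(X, X, gamma)
    return np.linalg.solve(K + lam * np.eye(len(X)), Y)

def predict(X_train, alphas, X_new, gamma=1.0):
    return rbf_kernel(X_new, X_train, gamma) @ alphas

rng = np.random.default_rng(2)
X = rng.normal(size=(50, 45))              # 45-dim input patterns
X /= np.linalg.norm(X, axis=1, keepdims=True)   # unit-length, as in the text
W = rng.normal(size=(45, 13))
Y = np.tanh(X @ W)                         # synthetic 13 subspace coefficients
alphas = fit_coeff_regressors(X, Y)
Y_hat = predict(X, alphas, X)
print(Y_hat.shape)  # (50, 13)
```

In the described pipeline, 13 such regressors are trained, one per subspace coefficient of the 3D face; the predicted coefficient vector is then projected back to the 3D face space to recover the depth map.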

## 6. Illumination invariant face recognition results

### 6.1. Experiment 1

### 6.2. Experiment 2


### 6.3. Experiment 3

### 6.4. Timing and comparative analysis


## 7. 3D face reconstruction results

### 7.1. Experiment 4

### 7.2. Experiment 5

### 7.3. Experiment 6

## 8. Conclusion

## Acknowledgments

## References and links


**OCIS Codes**

(100.5010) Image processing : Pattern recognition

(150.6910) Machine vision : Three-dimensional sensing

**ToC Category:**

Image Processing

**History**

Original Manuscript: January 27, 2011

Revised Manuscript: March 6, 2011

Manuscript Accepted: March 8, 2011

Published: April 4, 2011

**Virtual Issues**

Vol. 6, Iss. 5 *Virtual Journal for Biomedical Optics*

**Citation**

Ajmal Mian, "Illumination invariant recognition and 3D reconstruction of faces using desktop optics," Opt. Express **19**, 7491-7506 (2011)

http://www.opticsinfobase.org/oe/abstract.cfm?URI=oe-19-8-7491


### References

- W. Zhao, R. Chellappa, P. J. Phillips, and A. Rosenfeld, “Face recognition: A literature survey,” ACM Comput. Surv. 35(4), 399–458 (2003). [CrossRef]
- M. Turk and A. Pentland, “Eigenfaces for recognition,” J. Cogn. Neurosci. 3, 71–86 (1991). [CrossRef]
- P. Belhumeur, J. Hespanha, and D. Kriegman, “Eigenfaces vs. Fisherfaces: Recognition using class specific linear projection,” IEEE Trans. Pattern Anal. Mach. Intell. 19, 711–720 (1997). [CrossRef]
- L. Wiskott, J. Fellous, N. Kruger, and C. von der Malsburg, “Face recognition by elastic bunch graph matching,” IEEE Trans. Pattern Anal. Mach. Intell. 19(7), 775–779 (1997). [CrossRef]
- O. Arandjelovic and R. Cipolla, “Face recognition from video using the generic shape-illumination manifold,” in Proceedings of European Conference on Computer Vision (Springer, 2006), pp. 27–40.
- K. Lee and D. Kriegman, “Online probabilistic appearance manifolds for video-based recognition and tracking,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2005), pp. 852–859.
- L. Liu, Y. Wang, and T. Tan, “Online appearance model learning for video-based face recognition,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2007), pp. 1–7.
- J. Tangelder and B. Schouten, “Learning a sparse representation from multiple still images for on-line face recognition in an unconstrained environment,” in Proceedings of International Conference on Pattern Recognition (IEEE, 2006), pp. 1087–1090.
- M. Do and M. Vetterli, “The Contourlet transform: an efficient directional multiresolution image representation,” IEEE Trans. Image Process. 14(12), 2091–2106 (2005). [CrossRef] [PubMed]
- T. Joachims, “Making large-scale SVM learning practical,” in Advances in Kernel Methods , (MIT-Press, 1999), pp. 169–184.
- P. Belhumeur and D. Kriegman, “What is the set of images of an object under all possible illumination conditions?,” Int. J. Comput. Vision 28(3), 245–260 (1998). [CrossRef]
- A. Georghiades, P. Belhumeur, and D. Kriegman, “From few to many: Illumination cone models for face recognition under variable lighting and pose,” IEEE Trans. Pattern Anal. Mach. Intell. 23(6), 643–660 (2001). [CrossRef]
- P. Hallinan, “A low-dimensional representation of human faces for arbitrary lighting conditions,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 1994), pp. 995–999. [CrossRef]
- R. Basri and D. Jacobs, “Lambertian reflectance and linear subspaces,” IEEE Trans. Pattern Anal. Mach. Intell. 25(2), 218–233 (2003). [CrossRef]
- K. Lee, J. Ho, and D. Kriegman, “Acquiring linear subspaces for face recognition under variable lighting,” IEEE Trans. Pattern Anal. Mach. Intell. 27(5), 684–698 (2005). [CrossRef] [PubMed]
- Y. Schechner, S. Nayar, and P. Belhumeur, “A theory of multiplexed illumination,” in Proceedings of IEEE International Conference on Computer Vision (IEEE, 2003), pp. 808–815. [CrossRef]
- W. R. Boukabou and A. Bouridane, “Contourlet-based feature extraction with PCA for face recognition,” in Proceedings of NASA/ESA Conference on Adaptive Hardware and Systems (IEEE, 2008), pp. 482–486. [CrossRef]
- Y. Huang, J. Li, G. Duan, J. Lin, D. Hu, and B. Fu, “Face recognition using illumination invariant features in Contourlet domain,” in Proceedings of International Conference on Apperceiving Computing and Intelligence Analysis (IEEE, 2010), pp. 294–297. [CrossRef]
- A. Mian, “Face recognition using Contourlet transform and multidirectional illumination from a computer screen,” in Proceedings of Advanced Concepts for Intelligent Vision Systems (Springer, 2010), pp. 332–334. [CrossRef]
- K. Bowyer, K. Chang, and P. Flynn, “A survey of approaches and challenges in 3D and multi-modal 3D + 2D face recognition,” Comput. Vis. Image Und. 101, 1–15 (2006). [CrossRef]
- D. Scharstein, R. Szeliski, and R. Zabih, “A taxonomy and evaluation of dense two-frame stereo correspondence algorithms,” Int. J. Comput. Vision 47, 7–42 (2002). [CrossRef]
- V. Blanz and T. Vetter, “Face recognition based on fitting a 3D morphable model,” IEEE Trans. Pattern Anal. Mach. Intell. 25(9), 1063–1074 (2003). [CrossRef]
- G. Schindler, “Photometric stereo via computer screen lighting for real-time surface reconstruction,” in Proceedings of International Symposium on 3D Data Processing, Visualization and Transmission (IEEE, 2008).
- N. Funk and Y. Yang, “Using a raster display for photometric stereo,” in Proceedings of Canadian Conference on Computer and Robot Vision (IEEE, 2007), pp. 201–207.
- J. Clark, “Photometric stereo using LCD displays,” Image Vis. Comput. 28(4), 704–714 (2010). [CrossRef]
- T. Sim, S. Baker, and M. Bsat, “The CMU pose, illumination, and expression database,” IEEE Trans. Pattern Anal. Mach. Intell. 25(12), 1615–1618 (2003). [CrossRef]
- P. Viola and M. Jones, “Robust real-time face detection,” Int. J. Comput. Vision 57(2), 137–154 (2004). [CrossRef]
- L. Shen and L. Bai, “A review on Gabor wavelets for face recognition,” Pattern Anal. Appl. 9, 273–292 (2006). [CrossRef]
- J. D’Errico, “Surface fitting using Gridfit,” http://www.mathworks.com/matlabcentral/fileexchange/.
- P. Phillips, P. Flynn, T. Scruggs, K. Bowyer, J. Chang, K. Hoffman, J. Marques, J. Min, and W. Worek, “Overview of the face recognition grand challenge,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2005), pp. 947–954.
- T. Chen, W. Yin, X. Zhou, D. Comaniciu, and T. Huang, “Total variation models for variable lighting face recognition,” IEEE Trans. Pattern Anal. Mach. Intell. 28(9), 1519–1524 (2006). [CrossRef] [PubMed]
- X. Tan and B. Triggs, “Enhanced local texture feature sets for face recognition under difficult lighting conditions,” in Proceedings of IEEE International Workshop on Analysis and Modeling of Faces and Gestures (IEEE, 2007). [CrossRef]


### Figures

Fig. 1 – Fig. 13.
