## Rectification of elemental image set and extraction of lens lattice by projective image transformation in integral imaging

Optics Express, Vol. 18, Issue 11, pp. 12002-12016 (2010)

http://dx.doi.org/10.1364/OE.18.012002


### Abstract

We propose a new method for rectifying geometrical distortion in the elemental image set and extracting accurate lens lattice lines by projective image transformation. The initial information on the distortion in the acquired elemental image set is found by the Hough transform algorithm. With this initial information, the acquired elemental image set is rectified automatically, without prior knowledge of the characteristics of the pickup system, by a stratified image transformation procedure. Computer-generated elemental image sets with intentionally introduced distortions are used to verify the proposed rectification method. Experimentally captured elemental image sets are optically reconstructed before and after rectification by the proposed method. The experimental results support the validity of the proposed method, with high accuracy of image rectification and lattice extraction.

© 2010 OSA

## 1. Introduction

Aggoun used the Hough transform for finding the tilt angle of the lens array to correct the rotational distortion [12]. Lee *et al.* attached surface markers on the lens array in order to find information on the geometrical distortion in the elemental image set and applied a linear transformation to correct the distortion [14].

## 2. Proposed method for the projective image transformation

With points **x** on the CCD plane and points **x′** on the focal plane of the lens array, the projective transform relation is represented by

**x** = **H′** **x′**,  (1)

where **H′** is the projective transformation matrix, and **x** and **x′** are represented by using homogeneous coordinates. Note that barrel or pincushion distortion caused by the aberration of the lens array is ignored in Eq. (1). The purpose of the proposed method is to find the undistorted elemental image points **x′** from the acquired distorted elemental image points **x**. It is known that the inverse of a projective transformation matrix is also a projective transformation matrix [17]. Therefore, with **H** = (**H′**)^{−1}, the main objective is to find the projective transformation matrix **H** that satisfies **x′** = **H** **x**. If corresponding points **x** and **x′** are known, the projective transformation can be directly estimated and thus the acquired image can be rectified [17]. In this paper, however, stratification of the projective transform is used instead of direct estimation, so that length and angle information can be used in the transformation instead of known point correspondences.

With **H**_s, **H**_a, and **H**_p denoting the similarity, affine, and pure-projective transformation matrices, respectively, the projective transformation matrix **H** is decomposed into [16,17]

**H** = **H**_s **H**_a **H**_p,

where **H**_s contains *s*, the isotropic scaling, **R**, the rotation matrix, and **t**, the translation vector. **H** is estimated by recovering the three unit transforms, **H**_s, **H**_a, and **H**_p, sequentially. First, the pure-projective distortion is removed by applying **H**_p. A vanishing line detected in the distorted elemental image set is used to estimate **H**_p. Next, the affine distortion is removed by applying **H**_a. The parameters *α* and *β* in **H**_a are estimated using prior knowledge on the lens array shape, i.e., the length ratio and the angle between two intersected boundaries of the lens array. Finally, the similarity transform is recovered by applying **H**_s, which is estimated by detecting the rotation angle and the elemental image size. The recovery of the translation **t** is ignored in the proposed rectification algorithm. Figure 2 illustrates the proposed stratification of a projective-transformed elemental image set.
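The stratified decomposition can be sketched as follows. The parameter values, and the exact parameterizations of **H**_a and **H**_p, are illustrative assumptions in the spirit of [16,17], not the paper's own equations:

```python
import math

# Sketch of H = Hs * Ha * Hp with hypothetical parameters (s, theta, t,
# alpha, beta, and a vanishing line) -- illustrative only.

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def similarity(s, theta, tx, ty):
    """Hs: isotropic scale s, rotation theta, translation (tx, ty)."""
    c, n = math.cos(theta), math.sin(theta)
    return [[s * c, -s * n, tx], [s * n, s * c, ty], [0.0, 0.0, 1.0]]

def affine(alpha, beta):
    """Ha: upper-triangular part parameterised by (alpha, beta) as in [16]."""
    return [[1.0 / beta, -alpha / beta, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]

def pure_projective(l1, l2):
    """Hp: the last row carries the vanishing line (l1, l2, 1)."""
    return [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [l1, l2, 1.0]]

H = matmul(similarity(2.0, math.radians(5.0), 3.0, 4.0),
           matmul(affine(0.1, 1.2), pure_projective(1e-3, 2e-3)))
```

Note that the last row of the product equals the last row of **H**_p, which is why the pure-projective component can be recovered first, from the vanishing line alone.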

### 2.1 Preprocessing: detection of initial distortion information by Hough transform


### 2.2 Correction of pure-projective distortion

The pure-projective distortion is corrected by estimating **H**_p. The vanishing points are formed by the intersections of imaged straight lines that are parallel in the world coordinates, and the line through the vanishing points is the vanishing line. **H**_p can be represented by using this vanishing line, as described in Eq. (4). Consequently, the recovery of the affine property, or the correction of the pure-projective distortion, can be performed by detecting the vanishing line in the distorted image. When the lens array consists of rectangular lenses, the lens lattice lines are horizontally and vertically parallel to one another in the world coordinates. This parallel lens lattice is distorted so that its lines converge at vanishing points in the acquired elemental image set. Since the lens lattice is detected in the previous step as shown in Fig. 3(c), it is possible to calculate the converging points, i.e., the vanishing points, and thus the vanishing line. **H**_p is calculated by Eq. (4), and finally the pure-projective distortion is corrected by applying **H**_p to the acquired elemental image set. The corrected elemental image set, rectified from the pure-projective distortion that appeared in Fig. 3(c), is shown in Fig. 4(a). It is observed in Fig. 4(a) that the transverse and longitudinal peak lines are now parallel to each other, confirming the recovery of the affine geometry property of parallelism.
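The vanishing-line computation admits a short sketch. The lattice-line coefficients below are hypothetical; the cross-product construction of vanishing points and the form of **H**_p follow the standard projective-geometry treatment [17]:

```python
# Sketch (hypothetical numbers): two imaged lattice lines that are parallel
# in the world converge at a vanishing point; two vanishing points give the
# vanishing line l = (l1, l2, l3), and Hp with l in its last row sends l
# back to the line at infinity, restoring parallelism.

def cross(a, b):
    """Cross product: intersection of two lines, or line through two points."""
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

# Lines in homogeneous form (a, b, c) for ax + by + c = 0.
h1 = (0.01, 1.0, -100.0)   # two "horizontal" lattice lines
h2 = (0.02, 1.0, -200.0)
v1 = (1.0, 0.01, -100.0)   # two "vertical" lattice lines
v2 = (1.0, 0.02, -200.0)

vp_h = cross(h1, h2)               # vanishing point of horizontal lines
vp_v = cross(v1, v2)               # vanishing point of vertical lines
l = cross(vp_h, vp_v)              # vanishing line through both
l = tuple(c / l[2] for c in l)     # normalise so l3 = 1

# Pure-projective correction: points on l are mapped to infinity by Hp.
Hp = [[1.0, 0.0, 0.0],
      [0.0, 1.0, 0.0],
      [l[0], l[1], 1.0]]
```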

### 2.3 Correction of affine distortion

The affine distortion is corrected by estimating **H**_a in Eq. (4) and applying this matrix to the affine image, i.e., the corrected image of pure-projective distortion obtained in the previous step. The transformation matrix **H**_a has two degrees of freedom, which are described by the parameters *α* and *β*. These parameters define the image of the circular points on the metric geometric plane. Since **H**_a has two degrees of freedom, two independent constraints are required to estimate **H**_a. In this paper, a known length ratio and a known angle between two lines are used as the constraints. The lens array is again assumed to be composed of square lenses. Horizontal and vertical lens boundary lines are always perpendicular to each other in the world coordinates, and the length ratio between horizontal and vertical lines can be calculated from the number of lenses and the length of each rectangular lens.

_{a}*l*

_{1}connecting two points (

*x*

_{1},

*y*

_{1}) and (

*x*

_{3},

*y*

_{3}) and

*l*

_{2}connecting (

*x*

_{1},

*y*

_{1}) and (

*x*

_{2},

*y*

_{2}), represent the lens boundaries in this figure, the angle between them is 90° in the world coordinates. From this known angle constraints, it can be easily verified that the parameters

*α*and

*β*in

**H**make a constraint circle with a center pointand a radiuswhere

_{a}*d*

_{1}= (

*x*

_{1}-

*x*

_{2})/(

*y*

_{1}-

*y*

_{2}) = Δ

*x*

_{1}/Δ

*y*

_{1}and

*d*

_{2}= (

*x*

_{1}-

*x*

_{3})/(

*y*

_{1}-

*y*

_{3}) = Δ

*x*

_{2}/Δ

*y*

_{2}.

When the length ratio between *l*_1 and *l*_2 is known to be *r*, the parameters *α* and *β* make another constraint circle [16] with a center point

((Δ*x*_1Δ*y*_1 − *r*²Δ*x*_2Δ*y*_2)/(Δ*y*_1² − *r*²Δ*y*_2²), 0)

and a radius

*r*|Δ*x*_2Δ*y*_1 − Δ*x*_1Δ*y*_2|/|Δ*y*_1² − *r*²Δ*y*_2²|.

*r* can be determined if the number of lenses on the detected lines *l*_1 and *l*_2 is estimated. In order to find the number of lenses, a projective profile of the elemental image set obtained by compensation of pure-projective distortion in Fig. 4(a) is calculated after Canny edge detection and median filtering. Since the lens boundaries are outlined by high edge values, this projective profile is expected to have high peak values where the lens boundaries are located. The affine property was already restored in the previous step; hence, it can be assumed that the two pairs of lens boundaries are parallel to each other and have a uniform spacing. The number of lenses, or the size of the lens, in the compensated elemental image set can be estimated by maximizing the multiplication of the projective profile by an impulse train while varying the interval and offset of the impulse train. Interpolated projective profiles can be used for improving the sub-pixel accuracy; in the proposed system, the projective profile is interpolated by a factor of 10, minimizing errors from the discrete pixel structure. The projective profile of the compensated elemental image set along the longitudinal direction, i.e., the *l*_2 direction, is shown in Fig. 5(a). An impulse train of the detected lens size is shown in Fig. 5(b), along with the projective profile interpolated by a factor of 10 for the longitudinal direction. Using the detected lens size, the number of lenses on the lines *l*_1 and *l*_2 in Fig. 4(a) can be calculated, and the length ratio *r* is obtained.
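The impulse-train search can be sketched as follows. The edge profile here is synthetic, and the 0.1-px search grid mirrors the interpolation factor of 10, but the search ranges are assumptions:

```python
# Sketch of the impulse-train search on a synthetic edge profile (not the
# paper's data): the lens pitch and offset are the pair that maximizes the
# sum of the (linearly interpolated) profile sampled at the impulse positions.

def sample(profile, pos):
    """Linearly interpolated profile value at a sub-pixel position."""
    i = int(pos)
    if i + 1 >= len(profile):
        return 0.0
    frac = pos - i
    return profile[i] * (1.0 - frac) + profile[i + 1] * frac

def lens_pitch(profile, min_pitch, max_pitch, step=0.1):
    """Exhaustive search over pitch and offset on a 0.1-px grid."""
    best_score, best_pitch, best_offset = 0.0, None, None
    pitch = min_pitch
    while pitch <= max_pitch:
        for offset10 in range(int(pitch * 10)):
            offset = offset10 / 10.0
            score, pos = 0.0, offset
            while pos < len(profile):          # sum profile at impulse train
                score += sample(profile, pos)
                pos += pitch
            if score > best_score:
                best_score, best_pitch, best_offset = score, pitch, offset
        pitch = round(pitch + step, 10)        # keep the grid exact
    return best_pitch, best_offset

# Synthetic profile with lens-boundary peaks every 25 px starting at 5 px.
profile = [0.0] * 200
for p in range(5, 200, 25):
    profile[p] = 1.0

pitch, offset = lens_pitch(profile, 20.0, 30.0)   # recovers pitch 25, offset 5
```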

The parameters *α* and *β* are obtained by finding the intersections of these two circles, and thus **H**_a is calculated by Eq. (4). Figure 4(b) shows the elemental image set resulting from the correction of the affine distortion, which is obtained by applying **H**_a to the affine elemental image set, i.e., the elemental image set compensated for pure-projective distortion, in Fig. 4(a). Figure 4(b) clearly shows that the angle information is recovered and the resultant image has metric geometry properties.
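Since both constraint circles are centered on the *α*-axis, finding (*α*, *β*) reduces to intersecting two such circles. A minimal sketch, with hypothetical centers and radii chosen so the circles meet at *α* = 0.2, *β* = 1.1:

```python
import math

# Sketch of the two-constraint estimation of (alpha, beta): alpha follows
# from the radical line of the two circles, beta from the circle equation.
# The centres and radii below are hypothetical.

def intersect_alpha_beta(c1, r1, c2, r2):
    """Intersect circles centred at (c1, 0) and (c2, 0); return the beta > 0 root."""
    d = c2 - c1
    alpha = c1 + (d * d + r1 * r1 - r2 * r2) / (2.0 * d)
    beta_sq = r1 * r1 - (alpha - c1) ** 2
    if beta_sq < 0.0:
        raise ValueError("constraint circles do not intersect")
    return alpha, math.sqrt(beta_sq)

c1, r1 = 0.0, math.sqrt(0.2 ** 2 + 1.1 ** 2)   # angle-constraint circle
c2, r2 = 1.0, math.sqrt(0.8 ** 2 + 1.1 ** 2)   # length-ratio circle
alpha, beta = intersect_alpha_beta(c1, r1, c2, r2)
```

Only the *β* > 0 intersection is kept, since the two intersections are mirror images and describe the same rectification.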

### 2.4 Correction of rotation and scale distortions along with extraction of lens lattice

The rotation and scale distortions are corrected by estimating **H**_s in Eq. (4). Since the affine property was recovered in the previous step, the angle and length ratio between the adjacent side lines are considered to be corrected. The skew angle is calculated by using the line direction vectors, and the rotation matrix **R** in **H**_s is obtained from the calculated skew angle. The scaling factor *s* can also be determined from the lengths of the lines and the number of lenses on those lines, so that the length of each lens is an integer multiple of the CCD pixel pitch. Figure 6 shows the elemental image set obtained by the correction of the rotation and scale distortions using the similarity transform matrix **H**_s.
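The two similarity parameters admit a one-line computation each; the line length, lens count, and pixel pitch below are hypothetical:

```python
import math

# Sketch with hypothetical numbers: the skew angle from the direction vector
# of a detected boundary line, and the isotropic scale that makes the lens
# pitch an integer number of CCD pixels.

def skew_angle(dx, dy):
    """Rotation angle of a nominally horizontal lattice line."""
    return math.atan2(dy, dx)

def scale_factor(line_length_px, n_lenses, target_pitch_px):
    """Scale so that one lens spans exactly target_pitch_px pixels."""
    return target_pitch_px * n_lenses / line_length_px

theta = skew_angle(100.0, 3.0)       # boundary line rotated by ~1.7 degrees
s = scale_factor(1013.0, 10, 101)    # ~101.3 px/lens rescaled to 101 px/lens
```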

_{s}**H**,

_{p}**H**, and

_{a}**H**, respectively. PSNR between the final elemental image sets rectified by the simulated and extracted image transforms is 25.67 dB. This result shows that the error of the proposed method occurs in each procedure of the stratified transforms. However, from the PSNR result of the final rectified elemental image sets, it is confirmed that the proposed method can extract each transformation matrix with reasonably high accuracy.

## 3. Experiments on rectified elemental and optically reconstructed images of real objects

In the pickup experiments, *d*_object is the distance of the object from the lens array, and *θ*_x and *θ*_y are the rotation angles of the camera from the *x*-axis and the *y*-axis, respectively. In the experiment for the textured box, *d*_object is 40 mm and both *θ*_x and *θ*_y are 7°, causing a mismatch between the lens array and the camera CCD plane. In the experiment for the letter ‘2’ candle, *d*_object is 30 mm, and *θ*_x and *θ*_y are 2° and 3°, respectively. The two objects used in the experiments are shown in Fig. 8(a), and their elemental image sets optically captured with geometrical distortions are shown in Fig. 8(b). The sizes of the two acquired elemental image sets, which pick up the letter ‘2’ candle and the textured box, are 2247 × 2271 pixels and 2400 × 2300 pixels, respectively. Lens distortions such as barrel or pincushion distortion in the acquired elemental image sets are corrected by the commercial software ‘PTlens’ before the proposed rectification method is applied [20].

The transformation matrices **H**_p, **H**_a, and **H**_s extracted by the proposed method are summarized in Table 2 for the textured box images and in Table 3 for the letter ‘2’ candle images, respectively. We used MATLAB 7.8 for the implementation of the proposed rectification algorithm on a 2.40-GHz Core2 personal computer with 4 GB of RAM. The computational times for the rectification are 56.38 seconds for the letter ‘2’ candle and 62.67 seconds for the textured box. From Figs. 9 and 10, it can be observed that the geometrically distorted elemental image sets are rectified effectively and the lens boundary lattices are accurately extracted by the proposed projective image transformation method.

## 4. Conclusion

## Acknowledgment


**OCIS Codes**

(100.6890) Image processing : Three-dimensional image processing

(110.2990) Imaging systems : Image formation theory

(110.6880) Imaging systems : Three-dimensional image acquisition

**ToC Category:**

Image Processing

**History**

Original Manuscript: April 2, 2010

Revised Manuscript: May 12, 2010

Manuscript Accepted: May 13, 2010

Published: May 21, 2010

**Citation**

Keehoon Hong, Jisoo Hong, Jae-Hyun Jung, Jae-Hyeung Park, and Byoungho Lee, "Rectification of elemental image set and extraction of lens lattice by projective image transformation in integral imaging," Opt. Express **18**, 12002-12016 (2010)

http://www.opticsinfobase.org/oe/abstract.cfm?URI=oe-18-11-12002

## References and links

- G. Lippmann, “La photographie intégrale,” C. R. Acad. Sci. 146, 446–451 (1908).
- B. Lee, J.-H. Park, and S.-W. Min, Digital Holography and Three-Dimensional Display, T.-C. Poon, ed. (Springer US, 2006), Chap. 12.
- J.-H. Park, K. Hong, and B. Lee, “Recent progress in three-dimensional information processing based on integral imaging,” Appl. Opt. 48(34), H77–H94 (2009). [CrossRef] [PubMed]
- M. C. Forman, N. Davies, and M. McCormick, “Continuous parallax in discrete pixilated integral three-dimensional displays,” J. Opt. Soc. Am. A 20(3), 411–420 (2003). [CrossRef]
- F. Okano, H. Hoshino, J. Arai, and I. Yuyama, “Real-time pickup method for a three-dimensional image based on integral photography,” Appl. Opt. 36(7), 1598–1603 (1997). [CrossRef] [PubMed]
- J.-H. Park, S. Jung, H. Choi, Y. Kim, and B. Lee, “Depth extraction by use of a rectangular lens array and one-dimensional elemental image modification,” Appl. Opt. 43(25), 4882–4895 (2004). [CrossRef] [PubMed]
- J.-H. Park, S. Jung, H. Choi, and B. Lee, “Detection of the longitudinal and the lateral positions of a three-dimensional object using a lens array and joint transform correlator,” Opt. Mem. Neural Networks. 11, 181–188 (2002).
- G. Passalis, N. Sgouros, S. Athineos, and T. Theoharis, “Enhanced reconstruction of three-dimensional shape and texture from integral photography images,” Appl. Opt. 46(22), 5311–5320 (2007). [CrossRef] [PubMed]
- D.-H. Shin and E.-S. Kim, “Computational integral imaging reconstruction of 3D object using a depth conversion technique,” J. Opt. Soc. Korea 12(3), 131–135 (2008). [CrossRef]
- M. Kawakita, H. Sasaki, J. Arai, F. Okano, K. Suehiro, Y. Haino, M. Yoshimura, and M. Sato, “Geometric analysis of spatial distortion in projection-type integral imaging,” Opt. Lett. 33(7), 684–686 (2008). [CrossRef] [PubMed]
- J. Arai, M. Okui, M. Kobayashi, and F. Okano, “Geometrical effects of positional errors in integral photography,” J. Opt. Soc. Am. A 21(6), 951–958 (2004). [CrossRef]
- A. Aggoun, “Pre-processing of integral images for 3-D displays,” J. Display Technol. 2(4), 393–400 (2006). [CrossRef]
- N. P. Sgouros, S. S. Athineos, M. S. Sangriotis, P. G. Papageorgas, and N. G. Theofanous, “Accurate lattice extraction in integral images,” Opt. Express 14(22), 10403–10409 (2006). [CrossRef] [PubMed]
- J.-J. Lee, D.-H. Shin, and B.-G. Lee, “Simple correction method of distorted elemental images using surface markers on lenslet array for computational integral imaging reconstruction,” Opt. Express 17(20), 18026–18037 (2009). [CrossRef] [PubMed]
- R. C. Gonzalez, R. E. Woods, and S. L. Eddins, Digital Image Processing using MATLAB (Prentice Hall, 2004), Chap. 10.
- D. Liebowitz, and A. Zisserman, “Metric Rectification for Perspective Images of Planes,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (Santa Barbara, CA, USA, June 23–25, 1998), p.482.
- R. Hartley, and A. Zisserman, Multiple View Geometry in Computer Vision, second ed. (Cambridge University Press, Cambridge, 2000).
- J. Delon, A. Desolneux, J.-L. Lisani, and A. B. Petro, “A nonparametric approach for histogram segmentation,” IEEE Trans. Image Process. 16(1), 253–261 (2007). [CrossRef] [PubMed]
- J. F. Canny, “A computational approach to edge detection,” IEEE Trans. Pattern Anal. Mach. Intell. 8(6), 679–698 (1986).
- http://epaperpress.com/ptlens/ .
