Rectification of elemental image set and extraction of lens lattice by projective image transformation in integral imaging

Keehoon Hong, Jisoo Hong, Jae-Hyun Jung, Jae-Hyeung Park, and Byoungho Lee


Optics Express, Vol. 18, Issue 11, pp. 12002-12016 (2010)
http://dx.doi.org/10.1364/OE.18.012002



Abstract

We propose a new method for rectifying geometrical distortions in an elemental image set and extracting accurate lens lattice lines by projective image transformation. Initial information about the distortions in the acquired elemental image set is found with the Hough transform algorithm. With this initial information, the acquired elemental image set is rectified automatically by a stratified image transformation procedure, without prior knowledge of the characteristics of the pickup system. Computer-generated elemental image sets with intentional distortions are used to verify the proposed rectification method. Experimentally captured elemental image sets are optically reconstructed before and after rectification by the proposed method. The experimental results support the validity of the proposed method, with high accuracy of image rectification and lattice extraction.

© 2010 OSA

1. Introduction

Integral imaging is an autostereoscopic three-dimensional (3D) display technique that was initially formulated by Lippmann in 1908 [1]. It produces full-color 3D images with both horizontal and vertical parallax, and it supports multiple simultaneous viewers with continuous viewing points. Integral imaging consists of a pickup procedure for 3D information acquisition and a reconstruction procedure for 3D image display, as depicted in Fig. 1.

Fig. 1 Integral imaging, sequentially processed in accordance with two procedures: (a) pickup and (b) reconstruction.

For the 3D information acquisition in the pickup procedure, an object is recorded as an elemental image set through a lens array by using an image sensor such as a charge-coupled device (CCD). In the reconstruction procedure, the elemental image set is loaded on a two-dimensional (2D) display panel and transformed into an integrated 3D image of the object through a suitable lens array [2-5].

In recent years, numerous methods of 3D image data processing using elemental image sets have been introduced, such as depth map calculation, object recognition, and 3D structure reconstruction [6-9]. To use the elemental image set effectively in 3D image data processing, the 3D information of the object should be acquired without distortion or loss of data. In the pickup of a real object, however, the recorded elemental image set suffers spatial distortions: geometric distortions caused by translational and rotational misalignments between the lens array and the CCD plane, and barrel or pincushion distortion due to lens aberration. Another problem in the pickup procedure is a non-integer ratio between the CCD pixel pitch and the elemental lens pitch. This mismatch makes the individual elemental images unequal in size, which degrades accurate 3D image data processing. Geometrical errors in the elemental image set also cause problems in the reconstruction procedure, producing spatial distortion in the optically reconstructed 3D image [10,11]. Recently, several studies have attempted to correct geometrical distortions in elemental image sets. Aggoun and Sgouros et al. used the Hough transform to find the tilt angle of the lens array and correct the rotational distortion [12,13]. Lee et al. attached surface markers to the lens array to obtain information on the geometrical distortion in the elemental image set and applied a linear transformation to correct it [14]. These two methods, however, have limitations: the former can only correct rotational distortion, while the latter requires prior knowledge about the pickup circumstances and loses data because of the screening effect of the markers attached to the lens array.

In this paper, we propose a new method for rectifying geometrical distortions in the elemental image set and extracting an accurate lens lattice with minimal prior knowledge of the characteristics of the pickup system. The proposed method assumes that the distorted elemental image set is the result of an inverse projective image transformation applied to an undistorted, geometrically rectified elemental image set. To initialize the projective image transformation, the Hough transform [15] is used to find initial information on the geometrical distortions in the acquired elemental image set. With this initial information, the acquired elemental image set is rectified into affine and metric images sequentially [16]. After the distortions are rectified, a similarity transform is applied to remove the residual mismatch between the CCD pixel pitch and the lens pitch of the elemental image set and to extract the lens lattice. The proposed method is verified by comparing the transformation matrices extracted by the proposed procedure from a computer-generated elemental image set with those used to generate the intentionally distorted elemental image set. In addition, to demonstrate the validity of the proposed rectification method, optically reconstructed images obtained from experimentally captured elemental image sets of two real objects are compared before and after rectification.

2. Proposed method for the projective image transformation

In the pickup procedure, an elemental image set is obtained on the focal plane in front of a lens array. When the CCD plane is misaligned with respect to this image plane, the elemental image set is mapped into a geometrically distorted image on the CCD plane. This procedure can be described using a projective transform. With points x on the CCD plane and points x′ on the focal plane of the lens array, the projective transform relation is represented by
\mathbf{x} = \mathbf{H}' \mathbf{x}',
(1)

where H′ is the projective transformation matrix, and x and x′ are represented in homogeneous coordinates. Note that barrel or pincushion distortion caused by the aberration of the lens array is ignored in Eq. (1). The purpose of the proposed method is to find the undistorted elemental image points x′ from the acquired distorted elemental image points x. It is known that the inverse of a projective transformation matrix is also a projective transformation matrix [17]. Therefore, with H = (H′)⁻¹, the main objective is to find the projective transformation matrix H that satisfies

\mathbf{x}' = (\mathbf{H}')^{-1} \mathbf{x} = \mathbf{H} \mathbf{x}.
(2)

Generally, a projective transformation matrix has eight degrees of freedom. Hence, if four or more point correspondences between x and x′ are known, the projective transformation can be estimated directly and the acquired image rectified [17]. In this paper, however, a stratification of the projective transform is used instead of direct estimation, so that length and angle information can be used in place of known point correspondences.
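As a point of reference for the stratified approach developed below, the direct route is straightforward to implement. The following is a minimal numpy sketch of the standard direct linear transform (DLT) estimation [17]; the four correspondences shown are hypothetical values, not data from the paper.

```python
import numpy as np

def estimate_homography(src, dst):
    """DLT estimate of the 3x3 projective matrix H with dst ~ H @ src,
    from four or more point correspondences ((N, 2) arrays)."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    # H is the null vector of A, i.e. the smallest right singular vector.
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]  # fix the overall scale (8 degrees of freedom)

# Hypothetical example: four distorted lattice corners mapped to a square.
src = np.array([[10.0, 12.0], [498.0, 5.0], [505.0, 490.0], [8.0, 502.0]])
dst = np.array([[0.0, 0.0], [500.0, 0.0], [500.0, 500.0], [0.0, 500.0]])
H = estimate_homography(src, dst)
```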

The projective image transformation can be represented as a cascade of similarity, affine, and pure-projective transforms. With Hs, Ha, and Hp denoting the similarity, affine, and pure-projective transformation matrices, respectively, the projective transformation matrix H is decomposed as

\mathbf{H} = \mathbf{H}_s \mathbf{H}_a \mathbf{H}_p,
(3)

\mathbf{H}_s = \begin{bmatrix} s\mathbf{R} & \mathbf{t} \\ \mathbf{0}^T & 1 \end{bmatrix}, \quad \mathbf{H}_a = \begin{bmatrix} 1/\beta & -\alpha/\beta & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}, \quad \mathbf{H}_p = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ l_1 & l_2 & l_3 \end{bmatrix},
(4)

where s is the isotropic scaling, R the rotation matrix, t the translation vector, (α ± iβ, 1, 0) the affine-transformed circular points, and l_v = (l_1, l_2, l_3) the vanishing line of the projective-transformed image.

In this paper, the transformation matrix H is estimated by recovering the three constituent transforms Hp, Ha, and Hs sequentially. First, the pure-projective distortion is removed by applying Hp. A vanishing line l_v = (l_1, l_2, l_3), which connects the vanishing points in the projective-transformed elemental image set, is detected and used to find the pure-projective transformation matrix Hp. Next, the affine distortion is removed by applying Ha. The parameters α and β in Ha are estimated using prior knowledge of the lens array shape, i.e., the length ratio and the angle between two intersecting boundaries of the lens array. Finally, the similarity transform is recovered by applying Hs, which is estimated by detecting the rotation angle and the elemental image size. The recovery of the translation t is ignored in the proposed rectification algorithm. Figure 2 illustrates the proposed stratification of a projective-transformed elemental image set.

Fig. 2 Stratification of the projective image transformation into a cascade of pure-projective (Hp), affine (Ha), and similarity (Hs) transforms.
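To make the decomposition of Eqs. (3) and (4) concrete, the following is a small numpy sketch that assembles H from the three factors. The parameter values in the example are arbitrary placeholders, not quantities from the paper.

```python
import numpy as np

def similarity(s, theta, t):
    """H_s of Eq. (4): isotropic scale s, rotation theta (rad), translation t."""
    c, w = np.cos(theta), np.sin(theta)
    return np.array([[s * c, -s * w, t[0]],
                     [s * w,  s * c, t[1]],
                     [0.0,    0.0,   1.0]])

def affine(alpha, beta):
    """H_a of Eq. (4), parameterized by the circular-point coordinates."""
    return np.array([[1.0 / beta, -alpha / beta, 0.0],
                     [0.0, 1.0, 0.0],
                     [0.0, 0.0, 1.0]])

def pure_projective(l):
    """H_p of Eq. (4); l = (l1, l2, l3) is the vanishing line."""
    return np.array([[1.0, 0.0, 0.0],
                     [0.0, 1.0, 0.0],
                     [l[0], l[1], l[2]]])

# Eq. (3): the full projective transform as a cascade (placeholder values).
H = similarity(1.0, 0.05, (3.0, -2.0)) @ affine(0.1, 0.9) @ pure_projective((1e-4, 2e-4, 1.0))
```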

2.1 Preprocessing: detection of initial distortion information by Hough transform

In the proposed method, the shape of the lens array is used as prior knowledge. If the lens array consists of regular square lenses, the undistorted elemental image set has a 2D lens-boundary lattice pattern of regular squares. Through the CCD capturing process, this 2D lattice pattern is distorted by Eq. (1) into a tetragonal shape on the CCD plane. For rectification of the elemental image set, information about its geometrical distortions can be obtained by detecting the lattice pattern in the acquired distorted elemental image set. More specifically, the four side lines of the tetragonal lattice are detected, and the length ratios and angles between adjacent side lines are utilized in the later processing.

For reliable detection of the lattice pattern, preprocessing is performed on the acquired elemental image set. Figures 3(a) and 3(b) show an example object and its distorted elemental image set, the latter acquired purposely by an inverse projective image transformation of the undistorted, rectified elemental image set of the object.

Fig. 3 Preprocessing images of (a) an object, (b) a distorted elemental image set acquired with known projective transformation matrices, and (c) the tetragonal edge image with the four peak lines of the Hough transform.

The acquired distorted elemental image set is converted into a grayscale image and then segmented by a gray-level histogram segmentation algorithm [18], which distinguishes the object area from the background. The boundaries between the object area and the background are outlined by applying Canny edge detection [19] to the segmented image. When the object and background have similar intensity profiles, edges cannot be detected accurately because the histogram segmentation becomes difficult; in this paper we assume that the object and background have different intensity profiles. To find a 2D lattice pattern with two transverse and two longitudinal lattice lines in the resultant edge image, we use the Hough transform, which is widely used in image processing for detecting straight lines. For more accurate results, median filters are applied along the transverse and longitudinal directions before the Hough transform is employed. In the Hough transform results, the two maximum peak lines are selected for each of the transverse and longitudinal directions. In our framework, the Hough transform has a 0.1-degree angular resolution, and the skew angle due to the geometric distortion is assumed to remain within ±20 degrees.
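The chain just described can be sketched with Python/OpenCV as below. This is a hedged approximation: Otsu thresholding stands in for the nonparametric histogram segmentation of Ref. [18], a single 2D median blur replaces the paper's directional median filters, and the filename and the Canny/Hough thresholds are placeholders.

```python
import cv2
import numpy as np

# Load the acquired elemental image set (hypothetical filename).
img = cv2.imread("elemental_image_set.png")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# A single 2D median blur stands in for the paper's directional
# median filters along the transverse and longitudinal directions.
gray = cv2.medianBlur(gray, 5)

# Otsu thresholding stands in for the nonparametric histogram
# segmentation of Ref. [18]; it separates object from background.
_, seg = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# Outline the object/background boundary (thresholds are placeholders).
edges = cv2.Canny(seg, 50, 150)

# Hough transform with the 0.1-degree angular resolution used in the
# paper; OpenCV returns the peaks strongest-first.
lines = cv2.HoughLines(edges, rho=1, theta=np.deg2rad(0.1), threshold=200)

def strongest_near(lines, target, tol=np.deg2rad(20.0), n=2):
    """Keep the n strongest peaks whose angle is within tol of target
    (the skew is assumed below 20 degrees; wraparound at pi ignored)."""
    return [l for l in lines[:, 0, :] if abs(l[1] - target) < tol][:n]

longitudinal = strongest_near(lines, 0.0)        # near-vertical lines
transverse = strongest_near(lines, np.pi / 2.0)  # near-horizontal lines
```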

Figure 3(c) shows the resultant edge image with the four peak lines of the Hough transform. The four detected side lines of the lens lattice are used to find the pure-projective distortion of the geometrically distorted elemental image set, as described in Section 2.2. In Section 2.3, the resultant affine image set, which has a parallelogram shape, is corrected to a metric image set with a rectangular shape by using the length ratio between adjacent side lines of the parallelogram.

2.2 Correction of pure-projective distortion
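As outlined in Section 2, Hp is recovered from the vanishing line l_v that joins the two vanishing points of the detected lattice lines [16,17]. Below is a minimal numpy sketch of this standard construction, assuming the four homogeneous input lines come from the Hough peaks of Section 2.1; it is an illustration of the known geometry, not the paper's implementation.

```python
import numpy as np

def hough_to_homogeneous(rho, theta):
    """Convert a Hough peak (rho, theta) to a homogeneous line
    (a, b, c) satisfying a*x + b*y + c = 0."""
    return np.array([np.cos(theta), np.sin(theta), -rho])

def vanishing_line(l_top, l_bot, l_left, l_right):
    """Vanishing line joining the two vanishing points of the lattice:
    the transverse lines meet in one point, the longitudinal in the other."""
    v1 = np.cross(l_top, l_bot)     # vanishing point of transverse lines
    v2 = np.cross(l_left, l_right)  # vanishing point of longitudinal lines
    lv = np.cross(v1, v2)
    return lv / lv[2]               # normalize so l3 = 1

def pure_projective(lv):
    """H_p of Eq. (4): maps the detected vanishing line back to the
    line at infinity, undoing the pure-projective distortion."""
    return np.array([[1.0, 0.0, 0.0],
                     [0.0, 1.0, 0.0],
                     [lv[0], lv[1], lv[2]]])
```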

2.3 Correction of affine distortion

Figure 4(a) shows the elemental image set obtained by correcting the pure-projective distortion; it provides the constraint parameters for correcting the affine distortion. The two lines l_1, connecting points (x_1, y_1) and (x_3, y_3), and l_2, connecting (x_1, y_1) and (x_2, y_2), represent lens boundaries in this figure, so the angle between them is 90° in world coordinates. From this known angle constraint, it can be verified that the parameters α and β in Ha lie on a constraint circle with center

(c_\alpha, c_\beta) = \left( \frac{d_1 + d_2}{2}, 0 \right),
(5)

and radius

R = \left| \frac{d_1 - d_2}{2} \right|,
(6)

where d_1 = (x_1 - x_2)/(y_1 - y_2) = \Delta x_1 / \Delta y_1 and d_2 = (x_1 - x_3)/(y_1 - y_3) = \Delta x_2 / \Delta y_2.

If the length ratio of the two lines l_1 and l_2 in world coordinates is known to be r, the parameters α and β lie on another circle with center

(c_\alpha, c_\beta) = \left( \frac{\Delta x_2 \Delta y_2 - r^2 \Delta x_1 \Delta y_1}{\Delta y_2^2 - r^2 \Delta y_1^2}, 0 \right),
(7)

and radius

R = \left| \frac{r(\Delta x_1 \Delta y_2 - \Delta x_2 \Delta y_1)}{\Delta y_2^2 - r^2 \Delta y_1^2} \right|.
(8)

We now have two constraint circles, one from the known angle and one from the known length ratio. The parameters α and β are obtained by finding the intersection of these two circles, and Ha is then calculated from Eq. (4). Figure 4(b) shows the elemental image set resulting from correction of the affine distortion, obtained by applying Ha to the affine elemental image set, i.e., the elemental image set already compensated for pure-projective distortion, shown in Fig. 4(a). Figure 4(b) shows that the angle information is recovered and the resultant image has metric geometry.
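A small numpy sketch of Eqs. (5)-(8) and the circle intersection follows. The function names and the choice of the positive-β root are our own, and degenerate inputs (coincident circles, horizontal segments) are not handled.

```python
import numpy as np

def angle_circle(p1, p2, p3):
    """Right-angle constraint circle of Eqs. (5)-(6). p1 is the common
    corner (x1, y1); the two lines run p1->p2 and p1->p3."""
    d1 = (p1[0] - p2[0]) / (p1[1] - p2[1])
    d2 = (p1[0] - p3[0]) / (p1[1] - p3[1])
    return (d1 + d2) / 2.0, abs((d1 - d2) / 2.0)   # center c_alpha, radius

def ratio_circle(p1, p2, p3, r):
    """Known length-ratio constraint circle of Eqs. (7)-(8)."""
    dx1, dy1 = p1[0] - p2[0], p1[1] - p2[1]
    dx2, dy2 = p1[0] - p3[0], p1[1] - p3[1]
    denom = dy2**2 - r**2 * dy1**2
    return ((dx2 * dy2 - r**2 * dx1 * dy1) / denom,
            abs(r * (dx1 * dy2 - dx2 * dy1) / denom))

def intersect_on_axis(c1, R1, c2, R2):
    """Intersection of two circles whose centers (c1, 0) and (c2, 0)
    lie on the alpha-axis; the positive-beta solution is returned."""
    alpha = (R2**2 - R1**2 + c1**2 - c2**2) / (2.0 * (c1 - c2))
    beta = np.sqrt(R1**2 - (alpha - c1)**2)
    return alpha, beta   # H_a then follows from Eq. (4)
```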

2.4 Correction of rotation and scale distortions along with extraction of lens lattice

The geometric distortion of the original elemental image set that appeared in Fig. 3(b) has been successfully removed by the proposed projective image transformation, as shown in Fig. 6. The figure also indicates that the detected lens lattice lines are located accurately on the boundaries of the elemental images, enabling exact identification of each elemental image.
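The similarity step uses only a detected residual rotation angle and a scale that maps the measured elemental-image pitch to the target pixel pitch, with translation ignored as stated in Section 2. The following is a hedged sketch of such a correction; the function and its parameters are hypothetical, not the paper's implementation.

```python
import numpy as np

def similarity_correction(theta, measured_pitch, target_pitch):
    """H_s built from a detected residual rotation theta (radians) and
    an isotropic scale mapping the measured elemental-image pitch to
    the desired integer pixel pitch; translation is ignored."""
    s = target_pitch / measured_pitch
    c, w = np.cos(-theta), np.sin(-theta)   # rotate back by -theta
    return np.array([[s * c, -s * w, 0.0],
                     [s * w,  s * c, 0.0],
                     [0.0,    0.0,   1.0]])
```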

The transformation matrices extracted by the proposed method are listed in Table 1, in comparison with those used to simulate the distorted elemental image set.

Table 1. Comparison of the transformation matrices between the simulated and extracted results.

We measured the peak signal-to-noise ratio (PSNR) between the elemental image sets transformed with the simulated and with the extracted transformation matrices of Table 1. The PSNR is 27.59 dB, 22.77 dB, and 20.98 dB for the three steps Hp, Ha, and Hs, respectively; between the final elemental image sets rectified by the simulated and the extracted transforms, the PSNR is 25.67 dB. These results show that the proposed method introduces some error at each step of the stratified transform. Nevertheless, the PSNR of the final rectified elemental image sets confirms that the proposed method extracts each transformation matrix with reasonably high accuracy.
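For reference, the PSNR figures above follow from the standard definition; a minimal numpy sketch, assuming two same-sized 8-bit images:

```python
import numpy as np

def psnr(a, b, peak=255.0):
    """Peak signal-to-noise ratio in dB between two same-sized images."""
    mse = np.mean((a.astype(np.float64) - b.astype(np.float64)) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)
```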

3. Experiments on rectified elemental and optically reconstructed images of real objects

To evaluate the proposed projective image transformation on real pickup images, we performed experiments using elemental image sets optically picked up from two different 3D objects: a textured box and a letter ‘2’ candle, of 50 mm and 30 mm size, respectively. In both experiments we used a lens array consisting of regular square elemental lenses with a 1 mm lens pitch. Figure 7 shows a photograph and the optical arrangement of the experimental setup used to optically pick up the elemental image set of an object.

Fig. 7 Photograph and optical arrangement of the experimental setup for picking up the 3D object images.

In this figure, d_object is the distance of the object from the lens array, and θ_x and θ_y are the rotation angles of the camera about the x-axis and y-axis, respectively. In the experiment with the textured box, d_object is 40 mm and both θ_x and θ_y are 7°, causing a mismatch between the lens array and the camera CCD plane. In the experiment with the letter ‘2’ candle, d_object is 30 mm, and θ_x and θ_y are 2° and 3°, respectively. The two objects used in the experiments are shown in Fig. 8(a), and their elemental image sets, optically captured with geometrical distortions on the CCD plane, are shown in Fig. 8(b).

Fig. 8 (a) 3D objects of a textured box (top) and a letter ‘2’ candle (bottom) used in the experiments, and (b) their corresponding elemental image sets optically captured with geometrical distortions on the CCD plane.

The sizes of the acquired elemental image sets are 2247 by 2271 pixels for the letter ‘2’ candle and 2400 by 2300 pixels for the textured box. Lens distortions such as barrel or pincushion distortion in the acquired elemental image sets were corrected with the commercial software PTlens [20] before the proposed rectification method was applied.

The transform results for the elemental image sets, corrected from the experimentally acquired ones in Fig. 8(b) by the projective image transformation, are presented sequentially in Fig. 9 for the textured box and in Fig. 10 for the letter ‘2’ candle.

Fig. 9 Sequential corrections of distortions in the pickup procedure for the elemental image set captured optically from the textured box object: (a) initial distortion detection with Hough transform peaks, (b) affine geometry recovered image, (c) metric geometry recovered image, and (d) extracted lens lattice lines in the close-up of the upper left corner of (c).

Fig. 10 Sequential corrections of distortions in the pickup procedure for the elemental image set captured optically from the letter ‘2’ candle object: (a) initial distortion detection with Hough transform peaks, (b) affine geometry recovered image, (c) metric geometry recovered image, and (d) extracted lens lattice lines in the close-up of the upper left corner of (c).

Both figures illustrate the initial detection of the distorted lens lattice using the Hough transform peaks, the sequential recovery of affine and metric geometry, and finally the rectified elemental image set with the extracted lattice structure. The transform matrices Hp, Ha, and Hs extracted by the proposed method are summarized in Table 2 for the textured box images and in Table 3 for the letter ‘2’ candle images.

Table 2. Transform matrices extracted for the textured box images in the first experiment.

Table 3. Transform matrices extracted for the letter ‘2’ candle images in the second experiment.

We implemented the proposed rectification algorithm in MATLAB 7.8 on a 2.40-GHz Core2 personal computer with 4 GB of RAM. The computation times for the rectification are 56.38 seconds for the letter ‘2’ candle and 62.67 seconds for the textured box. Figures 9 and 10 show that the geometrically distorted elemental image sets are rectified effectively and the lens boundary lattices are accurately extracted by the proposed projective image transformation.

To confirm that the proposed transformation method correctly rectifies the elemental image sets of real objects, the rectified elemental image sets in Figs. 9(c) and 10(c) were optically reconstructed using lens arrays with the same lattice structure as in Figs. 9(d) and 10(d). The resulting 3D images optically reconstructed for the two objects are shown in Fig. 11(b). For comparison, Fig. 11(a) shows the 3D images optically reconstructed in the same way from the original distorted elemental image sets of Fig. 8(b).

Fig. 11 Optically reconstructed 3D images (a) from the distorted elemental image sets and (b) from the elemental image sets rectified by the proposed method.

As Fig. 11 shows, the 3D images reconstructed from the original distorted elemental image sets exhibit spatial deformation and blur. In contrast, the 3D images reconstructed from the elemental image sets rectified by the proposed method show no spatial distortion or deformation, yielding quite clear images of the two objects used in the experiments.

4. Conclusion

In this paper, we have proposed a new method for rectifying a geometrically distorted elemental image set and extracting the lens-boundary lattice structure. Since the distortion information in the elemental image set is found with the Hough transform algorithm, the proposed projective image transformation can rectify the geometrical distortion without prior knowledge of the characteristics of the pickup system. Through the stratified image transformation procedure, the proposed method recovers the geometrical distortions in consecutive order. The transformation matrices extracted by the proposed procedure are in good agreement with those used to generate the computer-generated elemental image sets with distortions. The experimental results for the optically captured elemental image sets of real 3D objects support the validity of the proposed method, with high accuracy of image rectification and lattice extraction as well as reasonably high quality of the reconstructed 3D images.

Acknowledgment

This research was supported by Basic Research Program through the National Research Foundation (NRF) of Korea funded by the Ministry of Education, Science and Technology (2009-0088705).

References and links

1. G. Lippmann, “La photographie intégrale,” C. R. Acad. Sci. Ser. IIc: Chim. 146, 446–451 (1908).
2. B. Lee, J.-H. Park, and S.-W. Min, Digital Holography and Three-Dimensional Display, T.-C. Poon, ed. (Springer US, 2006), Chap. 12.
3. J.-H. Park, K. Hong, and B. Lee, “Recent progress in three-dimensional information processing based on integral imaging,” Appl. Opt. 48(34), H77–H94 (2009).
4. M. C. Forman, N. Davies, and M. McCormick, “Continuous parallax in discrete pixilated integral three-dimensional displays,” J. Opt. Soc. Am. A 20(3), 411–420 (2003).
5. F. Okano, H. Hoshino, J. Arai, and I. Yuyama, “Real-time pickup method for a three-dimensional image based on integral photography,” Appl. Opt. 36(7), 1598–1603 (1997).
6. J.-H. Park, S. Jung, H. Choi, Y. Kim, and B. Lee, “Depth extraction by use of a rectangular lens array and one-dimensional elemental image modification,” Appl. Opt. 43(25), 4882–4895 (2004).
7. J.-H. Park, S. Jung, H. Choi, and B. Lee, “Detection of the longitudinal and the lateral positions of a three-dimensional object using a lens array and joint transform correlator,” Opt. Mem. Neural Networks 11, 181–188 (2002).
8. G. Passalis, N. Sgouros, S. Athineos, and T. Theoharis, “Enhanced reconstruction of three-dimensional shape and texture from integral photography images,” Appl. Opt. 46(22), 5311–5320 (2007).
9. D.-H. Shin and E.-S. Kim, “Computational integral imaging reconstruction of 3D object using a depth conversion technique,” J. Opt. Soc. Korea 12(3), 131–135 (2008).
10. M. Kawakita, H. Sasaki, J. Arai, F. Okano, K. Suehiro, Y. Haino, M. Yoshimura, and M. Sato, “Geometric analysis of spatial distortion in projection-type integral imaging,” Opt. Lett. 33(7), 684–686 (2008).
11. J. Arai, M. Okui, M. Kobayashi, and F. Okano, “Geometrical effects of positional errors in integral photography,” J. Opt. Soc. Am. A 21(6), 951–958 (2004).
12. A. Aggoun, “Pre-processing of integral images for 3-D displays,” J. Display Technol. 2(4), 393–400 (2006).
13. N. P. Sgouros, S. S. Athineos, M. S. Sangriotis, P. G. Papageorgas, and N. G. Theofanous, “Accurate lattice extraction in integral images,” Opt. Express 14(22), 10403–10409 (2006).
14. J.-J. Lee, D.-H. Shin, and B.-G. Lee, “Simple correction method of distorted elemental images using surface markers on lenslet array for computational integral imaging reconstruction,” Opt. Express 17(20), 18026–18037 (2009).
15. R. C. Gonzalez, R. E. Woods, and S. L. Eddins, Digital Image Processing Using MATLAB (Prentice Hall, 2004), Chap. 10.
16. D. Liebowitz and A. Zisserman, “Metric rectification for perspective images of planes,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (Santa Barbara, CA, USA, June 23–25, 1998), p. 482.
17. R. Hartley and A. Zisserman, Multiple View Geometry in Computer Vision, 2nd ed. (Cambridge University Press, Cambridge, 2000).
18. J. Delon, A. Desolneux, J.-L. Lisani, and A. B. Petro, “A nonparametric approach for histogram segmentation,” IEEE Trans. Image Process. 16(1), 253–261 (2007).
19. J. F. Canny, “A computational approach to edge detection,” IEEE Trans. Pattern Anal. Mach. Intell. 8(6), 679–698 (1986).
20. PTlens software, http://epaperpress.com/ptlens/.

OCIS Codes
(100.6890) Image processing : Three-dimensional image processing
(110.2990) Imaging systems : Image formation theory
(110.6880) Imaging systems : Three-dimensional image acquisition

ToC Category:
Image Processing

History
Original Manuscript: April 2, 2010
Revised Manuscript: May 12, 2010
Manuscript Accepted: May 13, 2010
Published: May 21, 2010

Citation
Keehoon Hong, Jisoo Hong, Jae-Hyun Jung, Jae-Hyeung Park, and Byoungho Lee, "Rectification of elemental image set and extraction of lens lattice by projective image transformation in integral imaging," Opt. Express 18, 12002-12016 (2010)
http://www.opticsinfobase.org/oe/abstract.cfm?URI=oe-18-11-12002

