## Scale-variant magnification for computational integral imaging and its application to 3D object correlator

Optics Express, Vol. 16, Issue 12, pp. 8855-8867 (2008)

http://dx.doi.org/10.1364/OE.16.008855


### Abstract

In this paper, we present a novel volumetric computational reconstruction (VCR) method for an improved 3D object correlator. VCR basically consists of magnification and superposition, and this paper introduces a new scale-variant magnification technique for VCR. To motivate the technique, we discuss an interference problem among elemental images in VCR: a large magnification causes interference among elemental images when they are superimposed, so the resolution of the reconstructed images is limited by this interference. To overcome the interference problem, we propose a method to calculate the minimum magnification factor for which VCR remains valid. Magnification by this new factor enables the proposed method to reconstruct resolution-enhanced images. To confirm the feasibility of the proposed method, we apply it to a VCR-based 3D object correlator. Experimental results indicate that our method outperforms the conventional VCR method.

© 2008 Optical Society of America

## 1. Introduction


## 2. Principle of VCR

First, the elemental images are inversely projected through a virtual pinhole array onto an output plane at a distance *z*. We call this output plane the reconstructed output plane (ROP) in this paper. Each inversely projected elemental image is digitally magnified by a magnification factor *M* = *z*/*g*, where *z* is the distance between the virtual pinhole array and the ROP, and *g* is the distance between the pinhole array and the elemental image plane. Second, the magnified elemental images are superimposed on the ROP, as shown in Fig. 1(c). VCR employs a computer to construct plane images, which means that the elemental images are treated as a discrete signal. Now, let us define the sampling interval of the input elemental images as *d*. Then the number of samples of the magnified elemental images must increase so that the sampling interval of the magnified elemental images on the ROP remains *d*. This implies that the sampling interval of the ROP is *d* regardless of the distance *z*. This magnification ensures that physical properties such as object size and length on the ROP are the same as those of the original image plane. Note that one can choose another sampling interval on the ROP (resampling on the ROP rather than in the elemental images), either wider or narrower. However, changing the sampling interval produces a mismatch between the size of the original objects and that of the reconstructed objects. Moreover, a wider sampling interval inevitably causes information loss, whereas a narrower one only increases the computational load without enhancing image quality. One should therefore be aware of these facts before changing the sampling interval. To completely reconstruct a plane image on the ROP at distance *z*, the magnification and overlapping must be repeated for all elemental images. Finally, normalization of the plane image on the ROP is required to eliminate the granular noise [9-10]. A set of such plane images reconstructed at successive distances forms the volumetric reconstruction along the *z*-direction.
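As a concrete illustration, the conventional VCR pipeline described above (magnification by *M* = *z*/*g*, superposition on the ROP, and normalization) can be sketched as follows. The square array layout, the nearest-neighbor magnification, and the assumption that the pinhole pitch equals *N* samples are simplifications of this sketch, not details taken from the paper.

```python
import numpy as np

def magnify(img, M):
    # Nearest-neighbor digital magnification by a factor M: the sampling
    # interval on the ROP stays d, so the number of samples grows by M.
    out = int(round(img.shape[0] * M))
    idx = np.clip((np.arange(out) / M).astype(int), 0, img.shape[0] - 1)
    return img[np.ix_(idx, idx)]

def vcr_plane(elemental, z, g):
    """Conventional VCR of one plane image on the ROP at distance z.

    elemental: K x K x N x N array of elemental images with pixel
    pitch d; the pinhole pitch is assumed to equal N samples here.
    """
    K, _, N, _ = elemental.shape
    M = z / g                          # conventional magnification factor
    s = int(round(N * M))              # magnified elemental-image size in samples
    W = (K - 1) * N + s                # ROP extent in samples
    rop = np.zeros((W, W))
    cnt = np.zeros((W, W))             # overlap counter
    for i in range(K):
        for j in range(K):
            mag = magnify(elemental[i, j], M)
            rop[i * N:i * N + s, j * N:j * N + s] += mag   # superimpose on the ROP
            cnt[i * N:i * N + s, j * N:j * N + s] += 1
    return rop / np.maximum(cnt, 1)    # normalization removes granular noise
```

Because every sample of the ROP is divided by the number of contributing elemental images, a uniform scene reconstructs to its original intensity at every distance.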

## 3. Proposed VCR method using SVM

The proposed method introduces a scale-variant magnification (SVM) factor *M_d*(*z*) to reduce the interference between adjacent pixels in the superposition process. That is, in the proposed method, each pixel of the elemental images is magnified by a factor of *M_d*, so that the number of magnified samples per pixel becomes *M_d* and the total size of the magnified pixel becomes *M_d* × *d*, as shown in Fig. 1(b). In most cases, *M_d* is much smaller than the conventional magnification factor, so our method uses less strongly magnified elemental images. This is illustrated in Fig. 1(d): a smaller magnification factor gives a smaller number of overlapped elemental images on the ROP. When the magnified pixels of the elemental images are accumulated on the ROP, the empty spaces (blank pixels) disappear as all elemental images are superimposed. Thus there is no empty space on the ROP, and our method minimizes the interference by reducing the overlapped area of the elemental images.

To illustrate the difference, consider a point object. In the conventional method, each pixel of the elemental images is magnified by *M* = *z*/*g*, and the magnified pixels are then superimposed on the ROP, as shown in Fig. 2(c). In this case, the reconstructed image of the point object is obtained on the ROP at the position where the point object is placed, and the superposition of the magnified pixels fills the empty space smoothly. However, the effective pixels interfere with each other because of the large magnification. Note that the effective pixels represent the component pixels of the 3D objects and carry the exact intensity values of the 3D objects; they should therefore not overlap if a high-resolution image is to be obtained.

In contrast, the proposed method determines the factor *M_d*(*z*) with respect to the distance *z* so that the magnified pixels cannot interfere with their adjacent effective pixels. This implies that the factor *M_d*(*z*) can be determined from the distance between the two nearest effective pixels. Figure 2(d) shows our method of determining the new factor *M_d*(*z*).

To determine *M_d*(*z*) with respect to *z*, we consider a ray diagram in which rays emanate from the elemental images and pass through the pinholes, as shown in Fig. 3(a). To formulate the relation between the elemental images and a plane image on the ROP, we use a ray analysis based on the ABCD matrix [6]. Suppose that each elemental image region consists of *N* pixels. Let us denote the index of pixels in each elemental image by *n* = 1, 2, …, *N*, and the index of elemental images and corresponding pinholes by *k* = 1, 2, …, *K*. Now, consider a ray emanating from the *n*-th pixel of the *k*-th elemental image and passing through the *k*-th pinhole. When the ray reaches the ROP at *z*, it is regarded as a point (vector) on the output plane at *z*; Fig. 3(a) shows the situation. Taking the pinhole positions as the reference heights, the vertical component (height) of the point vector for this ray can be written as *h*(*k*, *n*) = *kp* − (*z*/*g*)(*n* − (*N*+1)/2)(*p*/*N*), where *p* is the size of each elemental image and *p*/*N* is the pixel pitch. Suppose that the first ray comes from the *n_1*-th pixel of the *k_1*-th elemental image and the second ray from the *n_2*-th pixel of the *k_2*-th elemental image. Then the vertical distance between the two points (two rays on the ROP) is given by *α* = |(*k_1* − *k_2*)*p* − (*z*/*g*)(*n_1* − *n_2*)(*p*/*N*)|.

The SVM factor *M_d* is calculated by a minimization algorithm, so it is difficult to provide an analytical formula as in the conventional method. However, the determination process of the SVM factor is simple. Figure 4 shows the proposed minimization process for the SVM factor. The basic concept of the minimization process is that our method calculates all possible values of *α* (except *α* = 0) and then chooses the minimum among them; refer to Fig. 4 for the details of the minimization process.

Figure 3(b) shows the calculated *M_d* with respect to the distance, where *N* = 34 and *K* = 30. This result indicates that *M_d* is much smaller than the conventional magnification factor *M*, except at *z* = 51 mm. The exceptional case arises where many rays overlap at the same point. Another characteristic is that *M_d* is often unity, which implies that the effective pixels cover the entire region of the ROP. Consequently, our method sometimes does not require a magnification process at all, because a perfect plane image on the ROP is obtained by pixel-to-pixel mapping. We call the condition *M_d* = 1 perfect coverage. Figure 3(b) shows that there are many cases of perfect coverage, and a plane image on the ROP improves in visual quality when the perfect coverage condition is satisfied. In the next section, we discuss experimental results, including the improvement obtained under the perfect coverage condition.
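The minimization of Fig. 4 can be sketched as a brute-force search over all ray pairs. This sketch assumes that the ray from pixel *n* of elemental image *k* through pinhole *k* lands on the ROP at height *kp* − (*z*/*g*)(*n* − (*N*+1)/2)(*p*/*N*), and that *M_d* = max(*α*/*d*, 1); the paper specifies the exact procedure only through Fig. 4, so both are assumptions.

```python
import numpy as np

def svm_factor(z, g, p, N, K):
    """Brute-force sketch of the SVM-factor minimization (Fig. 4).

    alpha is the smallest nonzero vertical distance between any two
    rays on the ROP; magnifying each pixel of size d = p/N by
    M_d = alpha/d lets the magnified pixels fill the gaps without
    overlapping adjacent effective pixels.  M_d = 1 corresponds to
    the perfect-coverage case (pixel-to-pixel mapping suffices).
    """
    d = p / N
    n = np.arange(1, N + 1)
    k = np.arange(1, K + 1)
    # Heights of all K*N rays on the ROP at distance z (assumed geometry).
    h = (k[:, None] * p - (z / g) * (n[None, :] - (N + 1) / 2) * d).ravel()
    # Gaps between sorted distinct heights; rounding merges coincident rays.
    gaps = np.diff(np.unique(np.round(h, 9)))
    alpha = gaps.min() if gaps.size else d
    return max(alpha / d, 1.0)
```

Sweeping `svm_factor` over *z* with the paper's parameters (*N* = 34, *K* = 30, *g* = 3 mm, *p* = 1.08 mm) produces a curve that can be compared with the behavior reported in Fig. 3(b).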

## 4. Experimental results

### 4.1 Computational experiments

Figure 5(a) shows the setup for the computational experiments, where the pinhole array is located at *z* = 0 mm. The interval between pinholes is 1.08 mm, and the gap *g* between the elemental images and the pinhole array is 3 mm. Three images named Lena, Car, and Cow are used as test images, as shown in Fig. 5(b); the size of each image is 1020×1020 pixels. After a test image is located at distance *z*, its elemental images are synthesized by computational pickup based on a simple ray-geometric analysis [6]. Using the synthesized elemental images (*N* = 34 and *K* = 30), the elemental images are magnified by a factor of *M_d*(*z*), as shown in Fig. 3(b), and superimposed on the ROP at *z* to reconstruct a plane image. Finally, the normalization process is applied to the reconstructed plane image to eliminate the granular noise [9-10]. We denote the reconstructed plane image by *R_z*(*x*, *y*), where *z* is the distance between the reconstructed plane image and the pinhole array.

To evaluate the image quality objectively, we compute the mean square error (MSE) between the original test image *O*(*x*, *y*) and its reconstructed plane image *R_z*(*x*, *y*). The MSE is defined as

MSE = (1/*uv*) Σ_*x* Σ_*y* [*O*(*x*, *y*) − *R_z*(*x*, *y*)]²,

where *x* and *y* are the pixel coordinates of images of size *u* × *v*. For the three test images, the average MSE is calculated as a function of the distance *z*; the results are presented in Fig. 6. As *z* increases, the MSE of the conventional method increases because of the growing interference at large values of *M*. Our method, however, uses *M_d*(*z*) in place of *M*, and its MSE is much smaller than that of the conventional method. Thus, Fig. 6 indicates that the proposed method improves the image quality compared with the conventional method.
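The MSE measure above is straightforward to compute; the short sketch below assumes 2D grayscale arrays of equal size *u* × *v*.

```python
import numpy as np

def mse(original, reconstructed):
    # Mean square error between the original test image O(x, y) and the
    # reconstructed plane image R_z(x, y), both of size u x v.
    O = np.asarray(original, dtype=float)
    R = np.asarray(reconstructed, dtype=float)
    assert O.shape == R.shape, "images must have the same size u x v"
    return np.mean((O - R) ** 2)
```

An MSE of zero, as obtained by the proposed method at *z* = 21 mm, indicates a pixel-exact reconstruction.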

Figure 7 shows the plane images reconstructed at *z* = 21 mm and *z* = 51 mm by the conventional method and the proposed method, respectively. Referring to Fig. 6, the plane image reconstructed by the proposed method on the ROP at *z* = 21 mm has zero MSE; accordingly, its visual quality clearly surpasses that of the plane image reconstructed by the conventional method. On the other hand, the plane image reconstructed by our method on the ROP at *z* = 51 mm has the same MSE as that reconstructed by the conventional method, and the visual quality of the two images is the same. Therefore, from the results of Figs. 6 and 7, we can state that the worst-case performance of our method equals the performance of the conventional method, and that our method can provide a perfect reconstruction of the original test image, which is impossible with the conventional method.

### 4.2 Computational experiments for 3D object correlator

In the computational experiments for the 3D object correlator, the test images are located at *z* = 30 mm, and their elemental images are captured through a pinhole array using the computer-generated pickup method. The pinhole array used in these experiments consists of 30×30 pinholes, and its pinhole interval is 1.08 mm. The synthesized elemental images have 1020×1020 pixels, and each elemental image is composed of 34×34 pixels. To obtain templates for the object correlation, the elemental images of the test images are applied to both the conventional and the proposed VCR methods. The input signal objects are then tested at various distances; that is, they are located on ROPs whose distance *z* increases in steps of 3 mm. The pickup in Fig. 8(a) captures them as the elemental images of the input signal objects, and these captured elemental images are applied to the two VCR methods, including ours.

The correlation coefficient between the original test image *O*(*x*, *y*) and its reconstructed plane image *R_z*(*x*, *y*) is defined as the normalized cross-correlation of the two images. We compute the correlation coefficients between *O*(*x*, *y*) and *R_z*(*x*, *y*) at each distance *z* and find the maximum of these coefficients, which is called the correlation peak at distance *z* and is denoted by *C_peak*(*z*) in this paper. Consequently, the correlation peak *C_peak*(*z*) is a function of the distance *z*, and its behavior, such as sharpness, can serve as a performance measure for 3D object recognition. Figure 9 shows the two curves of *C_peak*(*z*) for the conventional method and the proposed method; each *C_peak*(*z*) represents the average of the correlation peaks for the three test images. The results in Fig. 9 indicate that, for both methods, the highest correlation peak occurs at *z* = 30 mm, where the test image is originally located. However, the proposed method provides a sharper *C_peak*(*z*) than the conventional method.

### 4.3 Experiments for real 3D object

In the experiments with a real 3D object, the target object ('tree') is located at *z* = 45 mm from the lenslet array. The lenslet array used in the experiments consists of 30×30 lenslets, and the size of each lenslet is 1.08×1.08 mm. The recorded elemental images have 1020×1020 pixels, as shown in Fig. 10(a), and each elemental image is composed of 34×34 pixels. The 3D object correlator is then implemented on a computer, as shown on the right of Fig. 8. We reconstruct the template of the target 3D object and a series of plane images using the two VCR methods depicted in Fig. 1. The plane images from these experiments, reconstructed at *z* = 42 mm, 45 mm, and 48 mm, are shown in Figs. 10(a) and 10(b) for the conventional VCR method and the proposed VCR method, respectively. The plane image at *z* = 45 mm, where the original 3D object is located, is well focused.

The plane image at *z* = 45 mm is used as the template for the correlation, and the correlation peaks are calculated between the template and the series of plane images from the two VCR methods. The correlation results are shown in Fig. 11. The curve of correlation peaks from our method along the distance *z* indicates that the 'tree' object is located at *z* = 45 mm, because the maximum correlation peak is obtained at this distance. As in the computational experiments above, the proposed method provides much sharper correlation peaks than the conventional method. This is evident when the resulting images of the conventional method are compared with those of the proposed method in Figs. 10(a) and 10(b): the three plane images from the conventional method look very similar, so it is hard to determine the accurate location of the tree object, whereas the three images in Fig. 10(b) show that the image at 45 mm looks clearly different from the others and is the best focused. Consequently, we select 45 mm as the location of the tree object. Figure 11 shows the same behavior: for the conventional method, the correlation peaks from 42 mm to 48 mm are almost the same, so one can only conclude that the tree object lies somewhere between 42 mm and 48 mm; in contrast, the curve of correlation peaks from our method is sharp, and the location of the maximum correlation peak is clearly identified as 45 mm.

## 5. Conclusions

## Acknowledgments

## References and links

1. G. Lippmann, "La photographie intégrale," C.R. Acad. Sci. **146**, 446-451 (1908).
2. F. Okano, H. Hoshino, J. Arai, and I. Yuyama, "Three-dimensional video system based on integral photography," Opt. Eng. **38**, 1072-1077 (1999). [CrossRef]
3. J.-S. Jang and B. Javidi, "Improved viewing resolution of three-dimensional integral imaging by use of nonstationary micro-optics," Opt. Lett. **27**, 324-326 (2002). [CrossRef]
4. B. Lee, S. Y. Jung, S.-W. Min, and J.-H. Park, "Three-dimensional display by use of integral photography with dynamically variable image planes," Opt. Lett. **26**, 1481-1482 (2001). [CrossRef]
5. M. Martínez-Corral, B. Javidi, R. Martínez-Cuenca, and G. Saavedra, "Multifacet structure of observed reconstructed integral images," J. Opt. Soc. Am. A **22**, 597-603 (2005). [CrossRef]
6. D.-H. Shin, B. Lee, and E.-S. Kim, "Multidirectional curved integral imaging with large depth by additional use of a large-aperture lens," Appl. Opt. **45**, 7375-7381 (2006). [CrossRef] [PubMed]
7. S.-H. Hong, J.-S. Jang, and B. Javidi, "Three-dimensional volumetric object reconstruction using computational integral imaging," Opt. Express **12**, 483-491 (2004), http://www.opticsinfobase.org/abstract.cfm?URI=oe-12-3-483. [CrossRef] [PubMed]
8. D.-H. Shin, E.-S. Kim, and B. Lee, "Computational reconstruction technique of three-dimensional object in integral imaging using a lenslet array," Jpn. J. Appl. Phys. **44**, 8016-8018 (2005). [CrossRef]
9. S.-H. Hong and B. Javidi, "Improved resolution 3D object reconstruction using computational integral imaging with time multiplexing," Opt. Express **12**, 4579-4588 (2004), http://www.opticsinfobase.org/abstract.cfm?URI=oe-12-19-4579. [CrossRef] [PubMed]
10. H. Yoo and D.-H. Shin, "Improved analysis on the signal property of computational integral imaging system," Opt. Express **15**, 14107-14114 (2007), http://www.opticsinfobase.org/abstract.cfm?URI=oe-15-21-14107. [CrossRef] [PubMed]
11. D.-H. Shin and H. Yoo, "Image quality enhancement in 3D computational integral imaging by use of interpolation methods," Opt. Express **15**, 12039-12049 (2007), http://www.opticsinfobase.org/abstract.cfm?URI=oe-15-19-12039. [CrossRef] [PubMed]
12. B. Javidi, R. Ponce-Díaz, and S.-H. Hong, "Three-dimensional recognition of occluded objects by using computational integral imaging," Opt. Lett. **31**, 1106-1108 (2006). [CrossRef] [PubMed]
13. J.-S. Park, D.-C. Hwang, D.-H. Shin, and E.-S. Kim, "Resolution-enhanced three-dimensional image correlator using computationally reconstructed integral images," Opt. Commun. **276**, 72-79 (2007). [CrossRef]
14. Y. Frauel and B. Javidi, "Digital three-dimensional image correlation by use of computer-reconstructed integral imaging," Appl. Opt. **41**, 5488-5496 (2002). [CrossRef] [PubMed]
15. J.-H. Park, J. Kim, and B. Lee, "Three-dimensional optical correlator using a sub-image array," Opt. Express **13**, 5116-5126 (2005), http://www.opticsinfobase.org/abstract.cfm?URI=oe-13-13-5116. [CrossRef] [PubMed]

**OCIS Codes**

(100.6890) Image processing : Three-dimensional image processing

(110.2990) Imaging systems : Image formation theory

**ToC Category:**

Imaging Systems

**History**

Original Manuscript: March 27, 2008

Revised Manuscript: May 12, 2008

Manuscript Accepted: May 28, 2008

Published: June 2, 2008

**Citation**

Dong-Hak Shin and Hoon Yoo, "Scale-variant magnification for computational integral imaging and its application to 3D object correlator," Opt. Express **16**, 8855-8867 (2008)

http://www.opticsinfobase.org/oe/abstract.cfm?URI=oe-16-12-8855

