## Three-dimensional optical correlator using a sub-image array

Optics Express, Vol. 13, Issue 13, pp. 5116-5126 (2005)

http://dx.doi.org/10.1364/OPEX.13.005116


### Abstract

A three-dimensional optical correlator using a lens array is proposed and demonstrated. The proposed method captures three-dimensional objects using the lens array and transforms them into sub-images. Through successive two-dimensional correlations between the sub-images, a three-dimensional optical correlation is accomplished. As a result, the proposed method is capable of detecting out-of-plane rotations of three-dimensional objects as well as three-dimensional shifts.

© 2005 Optical Society of America

## 1. Introduction


## 2. Principle


Pixels with the same index [*i,j*] collected from every elemental image form the [*i,j*]-th sub-image. Each sub-image in Fig. 2(b) consists of 6(H)×4(V) pixels since there are 6(H)×4(V) elemental images.
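This rearrangement of elemental images into sub-images is purely an indexing operation. The following sketch (not from the paper; the array sizes and random pixel values are illustrative) shows one way to perform it with NumPy:

```python
import numpy as np

# Hypothetical pickup: 6(H) x 4(V) elemental lenses, each elemental
# image having 8 x 8 pixels.
H_LENS, V_LENS = 6, 4      # number of elemental lenses
PIX = 8                    # pixels per elemental image, per axis

rng = np.random.default_rng(0)
# elemental[v, h] is the (v, h)-th elemental image
elemental = rng.random((V_LENS, H_LENS, PIX, PIX))

def to_subimages(elemental):
    """Collect the pixel with the same index [i, j] from every elemental
    image; together those pixels form the [i, j]-th sub-image."""
    # axes: (v_lens, h_lens, i, j) -> (i, j, v_lens, h_lens)
    return elemental.transpose(2, 3, 0, 1)

sub = to_subimages(elemental)
# Each sub-image has one pixel per elemental lens: 4(V) x 6(H) pixels.
assert sub.shape == (PIX, PIX, V_LENS, H_LENS)
# Pixel (v, h) of the [i, j]-th sub-image equals pixel [i, j] of the
# (v, h)-th elemental image.
assert sub[2, 5, 1, 3] == elemental[1, 3, 2, 5]
```

The same transpose applied again (with the inverse axis order) recovers the elemental images, so no information is lost in forming the sub-image array.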

The *i*-th sub-image (the collection of blue dots in Fig. 2(a)) contains the perspective of the object observed at an angle *θ*_{sub,i} given by

tan *θ*_{sub,i} = *y*_{i}/*g*,  (1)

where *y*_{i} is the position of the *i*-th pixel with respect to the optic axis of the corresponding elemental lens and *g* is the gap between the lens array and its image plane. Note that in an ordinary imaging system the angle of observation is determined by the relative position of the imaging lens with respect to the object. This observation-angle dependency on the object position, however, is removed in the sub-image. Figure 3 demonstrates this point. In the case of an ordinary imaging system shown in Fig. 3(a), the captured perspective of the object changes as the object moves from position 1 to position 2: when the object is located at position 1, the imaging lens observes the object at an angle *θ*_{observation} and the corresponding oblique perspective is captured; when the object is located at position 2, the imaging lens faces the object at an angle of 0° and a center perspective is captured. In the sub-image array, however, the perspective contained in each sub-image is the same regardless of the object shift, as shown in Fig. 3(b): the sub-image corresponding to the red pixels observes the object at an angle of 0° and the sub-image corresponding to the blue pixels observes it at *θ*_{sub,i} for both positions 1 and 2. This angle invariance of the sub-image makes it possible to select a certain angle of observation deterministically, regardless of the object position.

The rays constituting one sub-image are parallel, sampled at the elemental lens pitch *φ* as shown in Fig. 2, and thus the size of the object perspective in the sub-image is constant. When the object depth changes, only the position of the object perspective changes in each sub-image; the size itself does not. For example, suppose that an object whose transverse size covers 5 elemental lenses is imaged by the lens array shown in Fig. 4(b). The size of the object perspective in a sub-image is determined by the number of that sub-image's parallel lines that intersect the object: in Fig. 4(b), it is 5 pixels for the sub-image corresponding to the red dots and 6 pixels for the sub-image corresponding to the blue dots. When the object moves longitudinally as shown in the lower diagram of Fig. 4(b), the number of parallel lines intersecting the object is still 5 for the red dots and 6 for the blue dots, and thus the size of the object perspective in those sub-images is unchanged. Only the position of the object perspective in the sub-image changes (by 2 pixels for the sub-image corresponding to the blue dots and 0 pixels for the sub-image corresponding to the red dots in Fig. 4(b)). This size invariance removes the need for scale-invariant detection techniques such as a Mellin transform, even when the signal object shifts in the depth direction.
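The size invariance follows from a one-line geometric model: in a sub-image with observation angle *θ*, an object edge at transverse position *y* and depth *z* appears at *u* = (*y* + *z* tan *θ*)/*φ*, so the extent between two edges does not depend on *z*. A quick numerical check (all parameter values are illustrative, not the paper's):

```python
import math

phi = 1.0                  # elemental lens pitch (assumed, arbitrary units)
L = 5.0                    # transverse object size: covers 5 elemental lenses
th = math.radians(10.0)    # observation angle of this sub-image (assumed)

def span(y0, z):
    """Perspective of an object occupying y in [y0, y0 + L] at depth z:
    each edge maps to u = (y + z*tan(th)) / phi in the sub-image."""
    lo = (y0 + z * math.tan(th)) / phi
    hi = (y0 + L + z * math.tan(th)) / phi
    return lo, hi

lo1, hi1 = span(0.0, z=20.0)
lo2, hi2 = span(0.0, z=30.0)

# Depth change shifts the perspective but leaves its size unchanged.
assert abs((hi1 - lo1) - (hi2 - lo2)) < 1e-12
assert abs(lo1 - lo2) > 0.0
```

The depth term *z* tan *θ* cancels in the difference *hi* − *lo*, which is exactly why no Mellin-type scale compensation is needed.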

Suppose that the reference object is located at (*y*_{r}, *z*_{r}) and the signal object is located at (*y*_{s}, *z*_{s}) as shown in Fig. 2(a). First, let us assume for simplicity that the signal object has no out-of-plane rotation, i.e., *θ*_{y-z} = 0° in Fig. 2(a). Since there is no out-of-plane rotation, the perspective of the object contained in the *i*-th sub-image of the signal object is the same as that contained in the *i*-th sub-image of the reference object. Note that this holds irrespective of where the signal object is located with respect to the reference object, owing to the observation-angle invariance of the sub-image. Also note that the perspectives in these two sub-images have the same size, owing to the size-invariance property. The position of the perspective in the *i*-th sub-image is given by *u*_{r,i} = (1/*φ*)(*y*_{r} + *z*_{r} tan *θ*_{sub,y-z,i}) for the reference object and *u*_{s,i} = (1/*φ*)(*y*_{s} + *z*_{s} tan *θ*_{sub,y-z,i}) for the signal object. Their position difference Δ*u*_{r,i;s,i} can be written as

Δ*u*_{r,i;s,i} = *u*_{s,i} − *u*_{r,i} = (1/*φ*)[(*y*_{s} − *y*_{r}) + (*z*_{s} − *z*_{r}) tan *θ*_{sub,y-z,i}].  (2)

Since the *i*-th sub-images of the reference and signal objects contain the same perspective of the object with the same size, the position difference Δ*u*_{r,i;s,i} can be detected by correlating the *i*-th sub-images of the reference and signal objects using the JTC. In Eq. (2), only *y*_{s} and *z*_{s} are unknowns, and thus the 3D shift of the signal object can be found through two correlation operations with different *i*'s.
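As a numerical illustration of this last step (not part of the paper; all parameter values are assumed), evaluating Eq. (2) at two sub-image indices gives a 2×2 linear system in (*y*_{s}, *z*_{s}):

```python
import numpy as np

phi = 1.0                      # elemental lens pitch (assumed)
y_r, z_r = 2.0, 30.0           # known reference-object position (assumed)

def delta_u(y_s, z_s, tan_th):
    """Eq. (2): perspective-position difference at observation angle th."""
    return ((y_s - y_r) + (z_s - z_r) * tan_th) / phi

# Pretend these two values came from JTC correlation-peak measurements
# at two different sub-image indices (observation angles 5 and 15 deg).
tan1, tan2 = np.tan(np.radians([5.0, 15.0]))
true_y, true_z = 3.5, 36.0     # ground truth used to simulate the peaks
du1 = delta_u(true_y, true_z, tan1)
du2 = delta_u(true_y, true_z, tan2)

# Rearranging Eq. (2): y_s + z_s*tan(th_i) = phi*du_i + y_r + z_r*tan(th_i),
# which is linear in the two unknowns (y_s, z_s).
A = np.array([[1.0, tan1],
              [1.0, tan2]])
b = np.array([phi * du1 + y_r + z_r * tan1,
              phi * du2 + y_r + z_r * tan2])
y_s, z_s = np.linalg.solve(A, b)
assert np.allclose([y_s, z_s], [true_y, true_z])
```

Two distinct observation angles are enough because the two rows of the system differ only in tan *θ*_{sub,y-z,i}; any pair of sub-images with different indices therefore determines the shift uniquely.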

When there is an out-of-plane rotation *θ*_{y-z} of the signal object, we cannot find the 3D shift by correlating the reference-object sub-image with the signal-object sub-image of the same index, because the two will in general contain different perspectives of the object. In this case, the sub-image pair that contains the same perspective of the object should be found first; in other words, the out-of-plane rotation should be detected first. The 3D shift can then be found with the out-of-plane rotation taken into account. The out-of-plane rotation angle *θ*_{y-z} of the signal object is detected by correlating one arbitrarily chosen sub-image of the reference object with every sub-image of the signal object successively. Among them, the sub-image pair yielding the strongest correlation peak satisfies *θ*_{sub,y-z,i} − *θ*_{sub,y-z,j} = *θ*_{y-z}, where *θ*_{sub,y-z,i} is the angle of observation of the *i*-th sub-image of the reference object and *θ*_{sub,y-z,j} is that of the *j*-th sub-image of the signal object, since these two sub-images contain the same perspective of the object. Therefore, by finding the sub-image pair that produces the strongest correlation peak, the out-of-plane rotation angle *θ*_{y-z} is detected. After *θ*_{y-z} is detected, the 3D position of the signal object can be detected by correlating two more sub-image pairs, as in the no-rotation case. Now, however, we correlate the *i*-th sub-image of the reference object with the *j*-th sub-image of the signal object, where *θ*_{sub,y-z,i} − *θ*_{sub,y-z,j} = *θ*_{y-z}, since they contain the same perspective. The position difference Δ*u*_{r,i;s,j} of the object perspectives in the *i*-th reference sub-image and the *j*-th signal sub-image is given by

Δ*u*_{r,i;s,j} = (1/*φ*)[(*y*_{s} − *y*_{r}) + *z*_{s} tan *θ*_{sub,y-z,j} − *z*_{r} tan *θ*_{sub,y-z,i}].  (3)

Hence the 3D shift of the rotated signal object is found by correlating two sub-image pairs whose observation angles are *θ*_{sub,y-z,i} for the reference object and *θ*_{sub,y-z,i} + *θ*_{y-z} for the signal object and measuring the positions of their correlation peaks. Figure 5 shows the overall procedure used in the proposed method.
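The rotation-detection step can be mimicked numerically. In the sketch below (illustrative only: 1-D random "sub-images" and an FFT-based cross-correlation stand in for the optical JTC), an out-of-plane rotation is simulated as a shift of perspectives across sub-image indices, and the matching pair is found by locating the strongest correlation peak:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical 1-D sub-image stack: the rotation makes the j-th signal
# sub-image contain the perspective the i-th reference sub-image contains,
# simulated here by shifting the stack of perspectives by (j_true - i_ref).
n_sub, n_pix = 9, 64
reference = rng.random((n_sub, n_pix))             # reference sub-images
i_ref = 4                                          # arbitrarily chosen index
j_true = 6                                         # unknown to the detector
signal = np.roll(reference, j_true - i_ref, axis=0)

def corr_peak(a, b):
    """Strongest peak of the FFT-based cross-correlation (JTC stand-in)."""
    a = a - a.mean()
    b = b - b.mean()
    return np.abs(np.fft.ifft(np.fft.fft(a) * np.conj(np.fft.fft(b)))).max()

# Correlate the chosen reference sub-image against every signal sub-image.
peaks = [corr_peak(reference[i_ref], signal[j]) for j in range(n_sub)]
j_best = int(np.argmax(peaks))
assert j_best == j_true
# With Eq. (1) giving theta_sub for each index, the rotation angle then
# follows from the observation angles of the matched pair (i_ref, j_best).
```

Subtracting the mean before correlating sharpens the peak contrast, which plays the role of the DC-block commonly used in JTC setups.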

The angular resolution of the proposed method in detecting *θ*_{y-z} is given by Δ*θ* = *θ*_{sub,i+1} − *θ*_{sub,i}. Since the observation angle *θ*_{sub,i} of the *i*-th sub-image is given by Eq. (1), the angular resolution Δ*θ* becomes

Δ*θ* = tan⁻¹[(*y*_{i} + *s*)/*g*] − tan⁻¹(*y*_{i}/*g*),  (4)

where *s* is the pixel pitch at the image plane of the lens array and *g* is the gap between the lens array and its image plane. The angular range Ω that can be detected by the proposed method is determined by the range of the observation angles of the sub-images (the range of *θ*_{sub}). Since *y*_{i} is restricted to −*φ*/2 < *y*_{i} < *φ*/2, the angular range Ω becomes

Ω = 2 tan⁻¹(*φ*/2*g*).  (5)
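For concreteness, the resolution and range can be evaluated for an assumed pickup geometry (all parameter values below are illustrative, not the paper's):

```python
import math

# Assumed pickup parameters: elemental lens pitch phi, gap g between the
# lens array and its image plane, and pixel pitch s at the image plane.
phi = 1.0      # mm
g = 3.0        # mm
s = 0.05       # mm

def theta_sub(y_i):
    """Eq. (1): observation angle of the sub-image whose pixel sits at
    y_i relative to the optic axis of its elemental lens."""
    return math.atan(y_i / g)

# Angular resolution: one pixel step (s) at the image plane, near axis.
d_theta = theta_sub(s) - theta_sub(0.0)
# Angular range: y_i is limited to (-phi/2, phi/2).
omega = 2 * math.atan(phi / (2 * g))

assert d_theta > 0 and omega > 2 * d_theta
```

With these numbers the resolution is about 1° per pixel and the total range about 19°, showing the trade-off the text describes: a finer pixel pitch *s* improves resolution, while a larger pitch-to-gap ratio *φ*/*g* widens the range.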

## 3. Experimental results

In these plots, the slope of the *θ*_{sub,x-z,i}-vs.-Δ*u* line corresponds to (*z*_{s} − *z*_{r}) and its Δ*u*-offset corresponds to (*x*_{s} − *x*_{r}) and *z*_{s}. The experimental results shown in Figs. 9 and 10 demonstrate this point clearly. The slope increases as the signal object moves farther from the reference object longitudinally (see the second to sixth graphs in Figs. 9 and 10), and the Δ*u*-offset reflects (*x*_{s} − *x*_{r}) in the case of no rotation (see the first graph in Fig. 9), or (*x*_{s} − *x*_{r}) and *z*_{s} in the case of rotation (see every graph in Fig. 10). This provides convincing support for the 3D shift detection capability of the proposed method.

## 4. Conclusion

## Acknowledgments

## References and links

1. T.-C. Poon and T. Kim, “Optical image recognition of three-dimensional objects,” Appl. Opt. **38**, 370–381 (1999). [CrossRef]
2. B. Javidi and E. Tajahuerce, “Three-dimensional object recognition by use of digital holography,” Opt. Lett. **25**, 610–612 (2000). [CrossRef]
3. J. Rosen, “Three-dimensional optical Fourier transform and correlation,” Opt. Lett. **22**, 964–966 (1997). [CrossRef] [PubMed]
4. J. Rosen, “Three-dimensional joint transform correlator,” Appl. Opt. **37**, 7538–7544 (1998). [CrossRef]
5. J. Esteve-Taboada, D. Mas, and J. Garcia, “Three-dimensional object recognition by Fourier transform profilometry,” Appl. Opt. **38**, 4760–4765 (1999). [CrossRef]
6. O. Matoba, E. Tajahuerce, and B. Javidi, “Real-time three-dimensional object recognition with multiple perspectives imaging,” Appl. Opt. **40**, 3318–3325 (2001). [CrossRef]
7. Y. Frauel and B. Javidi, “Digital three-dimensional image correlation by use of computer-reconstructed integral imaging,” Appl. Opt. **41**, 5488–5496 (2002). [CrossRef] [PubMed]
8. J.-H. Park, S. Jung, H. Choi, and B. Lee, “Detection of the longitudinal and the lateral positions of a three-dimensional object using a lens array and joint transform correlator,” Opt. Mem. Neur. Net. **11**, 181–188 (2002).
9. C. Wu, A. Aggoun, M. McCormick, and S. Y. Kung, “Depth extraction from unidirectional image using a modified multi-baseline technique,” in *Conference on Stereoscopic Display and Virtual Reality Systems IX*, A. J. Woods, J. O. Merritt, S. A. Benton, and M. T. Bolas, eds., Proc. SPIE **4660**, 135–145 (2002).
10. J.-H. Park, S. Jung, H. Choi, Y. Kim, and B. Lee, “Depth extraction by use of a rectangular lens array and one-dimensional elemental image modification,” Appl. Opt. **43**, 4882–4895 (2004). [CrossRef] [PubMed]

**OCIS Codes**

(100.4550) Image processing : Correlators

(100.6890) Image processing : Three-dimensional image processing

(110.2990) Imaging systems : Image formation theory

(110.6880) Imaging systems : Three-dimensional image acquisition

**ToC Category:**

Research Papers

**History**

Original Manuscript: March 22, 2005

Revised Manuscript: June 17, 2005

Published: June 27, 2005

**Citation**

Jae-Hyeung Park, Joohwan Kim, and Byoungho Lee, "Three-dimensional optical correlator using a sub-image array," Opt. Express **13**, 5116-5126 (2005)

http://www.opticsinfobase.org/oe/abstract.cfm?URI=oe-13-13-5116
