OSA's Digital Library

Virtual Journal for Biomedical Optics | Exploring the Interface of Light and Biomedicine

  • Editors: Andrew Dunn and Anthony Durkin
  • Vol. 8, Iss. 1 — Feb. 4, 2013
Computational superposition compound eye imaging for extended depth-of-field and field-of-view

Tomoya Nakamura, Ryoichi Horisaki, and Jun Tanida


Optics Express, Vol. 20, Issue 25, pp. 27482-27495 (2012)
http://dx.doi.org/10.1364/OE.20.027482


Abstract

This paper describes a superposition compound eye imaging system for extending the depth-of-field (DOF) and the field-of-view (FOV) using a spherical array of erect imaging optics and deconvolution processing. This imaging system had a three-dimensionally space-invariant point spread function generated by the superposition optics. A sharp image with a deep DOF and a wide FOV could be reconstructed by deconvolution processing with a single filter from a single captured image. The properties of the proposed system were confirmed by ray-trace simulations.

© 2012 OSA

1. Introduction

Aberrations degrade the optical transfer function (OTF) in imaging optics, resulting in decreased depth-of-field (DOF) and field-of-view (FOV) in conventional imaging systems [1]. The framework of computational imaging, which is based on cooperative optical design and image processing, has been applied to solve this problem.

To increase the DOF using this framework, the optical system is designed to equalize the point spread functions (PSFs) along the optical axis with an optical modulation element, and then image processing retrieves a sharp image from the captured image by deconvolution filtering. A cubic phase plate, a spherically aberrated optical system, or a diffuser can be used for axial optical modulation [2–5]. Axial movement of the object and detector, or a change of the lens focus during exposure, can also be used to achieve PSF equalization along the optical axis [6, 7].

On the other hand, computational imaging has been applied to increase the FOV, which is limited by the off-axis aberrations. Typically, a monocentric optical system and/or a lens array equalize the PSFs laterally, and then image processing is applied to rearrange a number of captured images to reconstruct a large image [8–12]. A multiscale gigapixel camera with an FOV of 120°-by-50° has been demonstrated recently using the same concept [13].

We have proposed a method to achieve both a deep DOF and a wide FOV by using computational superposition imaging [14]. In this method, multiple images obtained under different optical conditions are superposed to equalize the PSFs both axially and laterally. Deconvolution filtering is applied to produce an aberration-reduced image. We have verified the principle by an experiment involving mechanical scanning of an aberrated imaging system [15].

In this paper, we present a computational superposition imaging technique based on spherical superposition compound eye optics. The following two sections survey computational superposition imaging and compound eye imaging, respectively, and apply them to single-shot imaging with an extended DOF and FOV. In this system, the mechanical scanning employed in Ref. [15] is eliminated, achieving single-shot computational superposition imaging. We describe the principle of the proposed system and show the results of ray-trace simulations performed to verify the effectiveness of the method.

2. Computational superposition imaging

2.1. Previous work

To realize deep-DOF and wide-FOV imaging, computational superposition imaging has been proposed [14]. A schematic diagram of the method is shown in Fig. 1. While changing the focusing distance and the optical axis direction, multiple images of an object are acquired by imaging optics whose PSFs are three-dimensionally space-variant owing to defocus and aberrations. The multiple images are superposed to equalize the PSFs, so the superposed image has a single blur kernel at every point; it can be approximated as an image captured by an imaging system with a three-dimensionally space-invariant PSF. A sharp, aberration-reduced image is then produced from the single superposed image by deconvolution processing with a single filter. With this method, we can acquire an aberration-reduced image over a large three-dimensional space. In other words, a deep-DOF, wide-FOV imaging system may be constructed, at low computational cost, from optics that are allowed to have defocus and aberrations.

Fig. 1 Schematic diagram of computational superposition imaging.
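The superposition-and-single-filter idea in Fig. 1 can be sketched numerically. The following 1-D toy is an illustrative assumption, not the authors' code: Gaussian kernels stand in for the space-variant defocus/aberration PSFs, and the Wiener regularization constant 1e-4 is arbitrary. It superposes several differently blurred captures and restores the result with one filter:

```python
import numpy as np

def gaussian_psf(n, sigma):
    """Centered, normalized 1-D Gaussian blur kernel of length n."""
    x = np.arange(n) - n // 2
    psf = np.exp(-x**2 / (2.0 * sigma**2))
    return psf / psf.sum()

def blur(signal, psf):
    """Circular convolution via FFT (roll the centered PSF to index 0 first)."""
    return np.real(np.fft.ifft(np.fft.fft(signal) *
                               np.fft.fft(np.roll(psf, -len(psf) // 2))))

n = 256
obj = np.zeros(n)
obj[100], obj[160] = 1.0, 0.5            # two point sources

# Capture under several optical conditions (different blur widths), then
# superpose: the averaged image has a single effective PSF everywhere.
sigmas = [1.0, 2.0, 3.0, 4.0]
superposed = np.mean([blur(obj, gaussian_psf(n, s)) for s in sigmas], axis=0)
effective_psf = np.mean([gaussian_psf(n, s) for s in sigmas], axis=0)

# A single Wiener filter restores the whole superposed image.
H = np.fft.fft(np.roll(effective_psf, -n // 2))
restored = np.real(np.fft.ifft(np.fft.fft(superposed) *
                               np.conj(H) / (np.abs(H)**2 + 1e-4)))
```

Because the averaged PSF is the same everywhere in the superposed image, one deconvolution filter suffices, which is the point of the method.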

2.2. Implementation for single-shot imaging

The mechanical scanning and computational superposition described above can both be emulated optically in a single shot using a superposition compound eye. A schematic diagram of the optical implementation is shown in Fig. 2(a). The imaging optics is composed of an array of erect imaging optics on a spherical surface and a spherical image sensor. Each pair of elemental erect imaging optics has a different focusing distance and optical axis direction, as indicated in the figure. The images are superposed on the sensor surface by the optical superposition effect of the array of erect imaging optics. The image captured by the whole imaging optics can thus be approximated as an image with a three-dimensionally space-invariant PSF. Therefore, single-shot computational superposition imaging may be implemented with this type of imaging optics.

Fig. 2 Schematic diagram of single-shot computational superposition imaging based on spherical superposition compound eyes with (a) positive and (b) negative curvatures.

The superposition imaging system in Fig. 2(a) can also be designed with a negative curvature, as shown in Fig. 2(b). The design with a positive curvature in Fig. 2(a) has a magnification smaller than unity, whereas the design with a negative curvature in Fig. 2(b) has a magnification larger than unity [16]. In this paper, we demonstrate the implementation in Fig. 2(a) by simulations.

3. Compound eye imaging

3.1. Previous work

Compound eyes of insects are composed of a number of elemental micro-optics and have been classified into two types: apposition compound eyes and superposition compound eyes [17]. The diameter of each elemental optic is about 10 μm to 80 μm [18–20]. The main difference between the two types is the relationship between the elemental optics and the photoreceptors. In the apposition type, the elemental optics are separated by partitions; therefore, rays through a single elemental optic reach a single photoreceptor. In the superposition type, the elemental optics, each of which produces an erect image, are not separated; therefore, rays through multiple elemental optics reach a single photoreceptor. Advantages of the superposition type over the apposition type are higher light efficiency and a higher cutoff frequency, because the overall imaging system behaves as a lens with a larger aperture than that of the elemental optics [21–23]. In comparison with the OTFs of the apposition type, the OTFs of the superposition type are degraded by spherical aberration [24]; however, the degraded OTFs can be restored by postprocessing. The resolution of superposition compound eyes is limited not by diffraction of the elemental optics but by the spherical aberration of the whole imaging optics, which has a high numerical aperture (NA) [18]. Therefore, the superposition type used in conjunction with postprocessing may give greater resolution than the apposition type. Our optical superposition method illustrated in Fig. 2 is inspired by the superposition compound eye.

3.2. Application to computational superposition imaging

In this paper, a gradient index (GRIN) lens is assumed as the elemental erect imaging optics in the superposition compound eye. To construct monocentric imaging optics, a spherical image sensor is required. Some state-of-the-art technologies have realized spherical image sensors [30, 31]. Alternatively, a spherical array of planar sensors can approximate a spherical sensor [9, 10].

4. Extending the DOF and FOV

In this section, we analyze spherical superposition compound eye imaging optics for extending the DOF and FOV. DOF extension with the imaging optics is illustrated in Fig. 3, together with definitions of the parameters. The imaging optics is composed of a spherical erect lens array and a spherical image sensor with a common center. Here, for system analysis, the array optics is assumed to consist of ideal aberration-free erect lenses of negligible diameter and length; that is, an infinite number of aberration-free, infinitesimally small erect lenses are arranged on the spherical surface. One lens is chosen arbitrarily to define the optical axis, as shown in the figure. A pair of lenses at the same angle with respect to the optical axis focuses at a certain distance, and pairs at different angles focus at different distances. Therefore, the focusing distance can be scanned optically with the spherical superposition compound eye, and the captured images have axially space-invariant PSFs. A sharp, deep-DOF image can be produced by deconvolution processing of a single captured image.

Fig. 3 Schematic diagram of DOF extension using a spherical array of ideal erect lenses.

Because the proposed spherical superposition compound eye is monocentric, any lens can define the optical axis, and the PSF is averaged laterally as well. Therefore, the spherical superposition compound eye extends the FOV and DOF simultaneously. Ultimately, the proposed method enables an omni-directional imaging system with a deep DOF.

The DOF of the proposed system is limited by the scanning range s of the focusing distances. The scanning range s is determined by the paraxial and marginal rays, as shown in Fig. 3. The marginal rays are limited by the virtual pupil in the figure. In practice, the virtual pupil arises from the vignetting of each GRIN lens and from the parallax barriers between the lenses; the vignetting is determined by the partitions between the lenses, which prevent stray light from neighboring lenses. The vignetting and parallax barriers restrict the FOV of each erect lens and thereby govern the diameter D of the virtual pupil and the scanning range s. In this paper, the effect of the restricted FOV of each lens is emulated by a virtual pupil for simplicity.

If a point source is located at a distance z_o ∈ (0, ∞) from the erect lens on the optical axis, the point source is imaged at a distance z_i^mar from the erect lens with the marginal rays. z_i^mar can be expressed as

z_i^{mar} = \frac{(z_o + t)(R^2 - 2r^2) - 2r^2\sqrt{R^2 - r^2}}{(R^2 - 2r^2) + 2(z_o + t)\sqrt{R^2 - r^2}} + t,   (1)

where R is the radius of curvature of the lens array surface, r is the radius of the virtual pupil, and t is the axial distance between the marginal and paraxial lens positions. Here, r = D/2 and t = R - \sqrt{R^2 - r^2}. Note that r ≤ R/2. The focal length f of the imaging optics can be calculated, with a paraxial approximation, as

f = \frac{R}{2},   (2)

which is obtained from Eq. (1) with r → 0 and z_o → ∞. The distance z_i^par of the image of the point source under the paraxial approximation can be expressed as

z_i^{par} = \frac{z_o f}{z_o + f} = \frac{z_o R}{2z_o + R}.   (3)

In this paper, z_i^par is chosen as the position z_i of the spherical image sensor for the ray-tracing simulations with a fixed z_i described in the following section, because z_i^par is independent of D. Therefore,

z_i = z_i^{par}.   (4)

Note that the center of curvature of the lens array surface is the same as that of the sensor surface to achieve monocentric optics. Using this configuration, the marginal focusing distance z_o^mar and the paraxial focusing distance z_o^par in Fig. 3 are given by

z_o^{mar} = \frac{(z_i - t)(R^2 - 2r^2) + 2r^2\sqrt{R^2 - r^2}}{(R^2 - 2r^2) - 2(z_i - t)\sqrt{R^2 - r^2}} - t,   (5)

z_o^{par} = \frac{z_i f}{z_i + f} = \frac{z_i R}{2z_i + R},   (6)

respectively. The scanning range s in Fig. 3 can be described as

s = z_o^{mar} - z_o^{par}.   (7)

Here, the F-number F_{i/#} in the image space is defined as

F_{i/#} = \frac{z_i}{D}.   (8)
As shown in Eqs. (5)–(7), a smaller radius of curvature R of the lens array and a larger diameter D of the virtual pupil increase the scanning range s. This means that a smaller F-number F_{i/#} increases the scanning range s.
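The focusing geometry of Eqs. (1)–(8) can be sanity-checked numerically. The sketch below is one plausible transcription of the formulas (the dropped minus signs are reconstructed so that the paraxial limits of Eqs. (2) and (3) hold), and R, D, and z_o are illustrative values in mm, not the paper's design parameters:

```python
import math

def zi_par(zo, R):
    """Paraxial image distance, Eq. (3): z_i = z_o f / (z_o + f), with f = R/2."""
    return zo * R / (2 * zo + R)

def zi_mar(zo, R, D):
    """Marginal-ray image distance, Eq. (1)."""
    r = D / 2.0
    root = math.sqrt(R**2 - r**2)
    t = R - root                          # marginal/paraxial lens offset
    a = zo + t
    return ((a * (R**2 - 2*r**2) - 2*r**2 * root) /
            ((R**2 - 2*r**2) + 2 * a * root)) + t

def zo_mar(zi, R, D):
    """Marginal focusing distance, Eq. (5) (algebraic inverse of Eq. (1))."""
    r = D / 2.0
    root = math.sqrt(R**2 - r**2)
    t = R - root
    return (((zi - t) * (R**2 - 2*r**2) + 2*r**2 * root) /
            ((R**2 - 2*r**2) - 2 * (zi - t) * root)) - t

def zo_par(zi, R):
    """Paraxial focusing distance, Eq. (6)."""
    return zi * R / (2 * zi + R)

R, D = 40.0, 20.0                         # illustrative curvature and pupil diameter
zi = zi_par(100.0, R)                     # fix the sensor at the paraxial image, Eq. (4)
s = zo_mar(zi, R, D) - zo_par(zi, R)     # scanning range, Eq. (7)
F_num = zi / D                            # image-space F-number, Eq. (8)
```

With these numbers, the marginal and paraxial focusing distances differ substantially, so the focusing distance is indeed scanned over a finite range s without any mechanical motion.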

Fig. 4 Geometrical relationship between the pitches of the erect lenses and detectors.

5. Simulations

The system parameters used in the simulations are summarized in Table 1, where λ is the wavelength. The misfocus MTFs of the two models are evaluated for different diameters D of the virtual pupil to show the impact of the F-number F_{i/#}. In this case, the radius of the spherical sensor is around 20 mm. Such a spherical sensor may be realized by extending some of the latest technologies available.

Table 1. System parameters in simulations.


In the simulations, the refractive index of the GRIN lens was modeled as

n = n_0 + n_{d2} d^2 + n_{d4} d^4 + n_{d6} d^6 + n_z z + n_{z2} z^2 + n_{z3} z^3,   (10)

where d is the radial distance, z is the axial distance, and each coefficient n_{(·)} weights the corresponding variable and order. The refractive-index coefficients of the GRIN lens were optimized with Zemax. The cost function in the optimization was composed of the Seidel aberrations and the root mean square (RMS) of the spot radius. The function was minimized to suppress the aberrations of the elemental GRIN lens under constraints on the minimum and maximum refractive indexes, which were set to 1.6 and 1.8, respectively. The obtained coefficients are shown in Table 2. The diameter of the GRIN lenses was chosen based on Eq. (9) with the parameters in Table 1, D = 20 mm, and p_d = 8 μm. In this case, the maximum lens pitch, p_l, was 88.6 μm.
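Eq. (10) is a plain polynomial in d and z, so evaluating the index profile is straightforward. The helper below uses placeholder coefficients for illustration; the actual optimized values are those listed in Table 2 and are not reproduced here:

```python
def grin_index(d, z, n0, nd2=0.0, nd4=0.0, nd6=0.0, nz=0.0, nz2=0.0, nz3=0.0):
    """Refractive index n(d, z) of Eq. (10): even radial orders up to d^6
    plus axial orders z through z^3."""
    return (n0 + nd2 * d**2 + nd4 * d**4 + nd6 * d**6
            + nz * z + nz2 * z**2 + nz3 * z**3)
```

In an optimization such as the one described above, these coefficients would be the free variables, bounded so that n stays within the stated 1.6–1.8 range.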

Table 2. Parameters of GRIN lenses in simulations.


The effect of the lens pitch on the MTFs was simulated. Figure 5 compares the MTFs of GRIN lens arrays with two different lens pitches. The sidelobe of the MTF with p_l = 80 μm is smoother than that with p_l = 200 μm because the former satisfies the condition of Eq. (9), whereas the latter does not.

Fig. 5 MTFs of GRIN lens arrays with different diameters.

The results of ray-tracing with the spherical GRIN lens array with D = 20 mm are shown in Fig. 6. Figures 6(a) and (b) show an overview of the system and the rays passing through the GRIN lenses, respectively. The GRIN lenses are optically separated with partitions to prevent stray light. The imaging optics had spherical aberration, as shown in Fig. 6(c). The spherical aberration was used for DOF extension, as mentioned in Section 3.2.

Fig. 6 Ray-trace of a spherical GRIN lens array. (a) Overview, (b) rays passing through GRIN lenses, and (c) rays near sensor surface.

The misfocus MTFs of the two models are shown in Fig. 7. They were evaluated with the following parameter, called the normalized misfocus parameter Ψ [1]:

\Psi = \frac{\pi D^2}{4\lambda}\left(\frac{1}{f} - \frac{1}{z_\psi} - \frac{1}{z_i}\right).   (11)

In the simulations, the normalized misfocus parameter Ψ was varied from −90 to +90 by changing the object distance z_ψ while keeping the position z_i of the sensor fixed. The scale of the frequency axis in the plots of Fig. 7 is fixed to twice the maximum sampling frequency of the sensor (2/(2p_d) = 125 cycles/mm). The range of Ψ in this simulation was determined so that the misfocus MTFs for D = 20 mm have no zero values below the maximum sampling frequency (1/(2p_d) = 62.5 cycles/mm). This range is defined as the effective DOF in this paper. The misfocus MTF of an ideal single lens with a diameter of 20 mm is also shown in Fig. 7(a) as a baseline. These MTFs showed drastic variations with Ψ and had multiple zero values, so the defocused images captured by this imaging optics could not be deconvolved with a single filter.
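Eq. (11) can be evaluated directly; the values passed in below are illustrative assumptions (in mm), not the design numbers from Table 1:

```python
import math

def misfocus_psi(D, wavelength, f, z_psi, z_i):
    """Normalized misfocus parameter of Eq. (11)."""
    return (math.pi * D**2 / (4.0 * wavelength)) * (1.0/f - 1.0/z_psi - 1.0/z_i)

# Example: Psi for a hypothetical system (D = 20 mm, lambda = 550 nm,
# f = 20 mm, sensor at z_i = 50/3 mm) at two object distances.
psi_far = misfocus_psi(20.0, 550e-6, 20.0, 1000.0, 50.0/3.0)
psi_near = misfocus_psi(20.0, 550e-6, 20.0, 100.0, 50.0/3.0)
```

The πD²/4λ prefactor means |Ψ| scales with the square of the pupil diameter, which is why the simulated Ψ range is tied to the choice of D.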

Fig. 7 Misfocus MTFs. (a) MTFs of ideal single lens with a diameter of 20 mm. MTFs of ideal erect lens arrays with (b) D = 5 mm, (c) D = 10 mm, and (d) D = 20 mm. MTFs of GRIN lens arrays with (e) D = 5 mm, (f) D = 10 mm, and (g) D = 20 mm.

Figures 7(b)–(d) show the misfocus MTFs of ideal erect lens arrays with D = 5 mm, D = 10 mm, and D = 20 mm, respectively. Figures 7(e)–(g) show those of GRIN lens arrays with the same diameters. In both models, a larger D increased the depth-invariance of the misfocus MTFs and decreased their magnitude. A larger D gives a smaller F_{i/#}, as shown in Eq. (8); therefore, the depth-invariance of the misfocus MTFs of the proposed system increases as F_{i/#} decreases, while their magnitude decreases. This tradeoff between depth-invariance and magnitude can be controlled through F_{i/#}.

In the simulations, the effective DOF of the proposed system was roughly eight times shorter than the scanning range s. In contrast, the object scanning method, which changes the object distance during the exposure [6], achieves the same effect over half of its scanning range. Therefore, the effective DOF of the proposed method is about four times shorter than that of the object scanning method. This is because the former scans the focusing distance with only partial (i.e., marginal) rays to avoid mechanical scanning, whereas the latter scans the object and uses all rays.

DOF extension was demonstrated with computationally generated images using the MTFs of the ideal single lens and of the GRIN lens array with D = 20 mm, shown in Figs. 7(a) and 7(g), respectively. In this demonstration, the object distance was scanned from Ψ = −90 to Ψ = +90. The object was the Lenna image, with a pixel count of 128 × 128 and a pixel pitch p_d of 8 μm, as in the ray-trace simulation above. Figures 8(a) and (b) show images captured by the ideal single lens and the GRIN lens array, respectively. The images captured by the GRIN lens array were similarly defocused throughout the range of misfocus, whereas the in-focus and defocused images captured by the ideal lens were obviously different. Figures 8(c) and (d) show deconvolution results of the captured images without and with additional Gaussian noise, whose signal-to-noise ratio (SNR) was 40 dB, respectively. A Wiener filter, calculated from the MTF at Ψ = 0, was applied in the deconvolution processing. The proposed scheme reconstructed sharp images with a deeper DOF than single-lens imaging, even from captured images with noise.
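Generating additive Gaussian noise at a prescribed SNR, such as the 40 dB used here, amounts to scaling the noise variance to the mean signal power. A small sketch (the image array is a stand-in for the simulated captures, and the seeded generator is only for reproducibility):

```python
import numpy as np

def add_noise(img, snr_db, rng=None):
    """Return img plus zero-mean Gaussian noise at the given SNR in dB."""
    rng = np.random.default_rng(0) if rng is None else rng
    signal_power = np.mean(np.asarray(img, dtype=float) ** 2)
    noise_power = signal_power / 10.0 ** (snr_db / 10.0)
    return img + rng.normal(0.0, np.sqrt(noise_power), np.shape(img))
```

Since SNR is a power ratio, 40 dB corresponds to a noise standard deviation of 1% of the RMS signal level.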

Fig. 8 Performance verification with the Lenna image. Images captured by (a) the ideal single lens and (b) the GRIN lens array without noise. Deconvolution results of images captured by the GRIN lens array (c) without and (d) with additional Gaussian noise.

The images captured by the ideal single lens and the deconvolved images from the GRIN lens array were evaluated using the peak signal-to-noise ratio (PSNR), as shown in Fig. 9. At misfocused distances, the deconvolved images from the GRIN lens array had better PSNRs than the images captured with the ideal single lens, even when 40 dB noise was added to the captured images. A further advantage of the superposition compound eye is its high light efficiency, and hence high measurement SNR, as mentioned in Section 3.1.
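The PSNR metric used in Fig. 9 is a standard function of the mean squared error against the reference image; a minimal sketch (the unit peak value is an assumption for normalized images):

```python
import numpy as np

def psnr(ref, img, peak=1.0):
    """Peak signal-to-noise ratio in dB; infinite for identical images."""
    mse = np.mean((np.asarray(ref, dtype=float) - np.asarray(img, dtype=float)) ** 2)
    return float('inf') if mse == 0 else float(10 * np.log10(peak**2 / mse))
```

The infinite-PSNR case for a zero-error image matches the ∞ dB entry noted in the caption of Fig. 9.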

Fig. 9 PSNRs of the final images shown in Fig. 8. Note that the PSNR of the GRIN lens array at Ψ = 0 without noise is ∞ dB.

6. Conclusions

In this study, we showed the principle and performance of spherical superposition compound eye optics for computational DOF and FOV extension. The system captures an optically superposed image of an object at different focusing distances and along different optical axes by using a spherical array of erect imaging optics. The PSF of the captured image is three-dimensionally space-invariant, so a sharp image with a deep DOF and a wide FOV can be reproduced from the single captured image by deconvolution processing with a single filter. The system model and simulations were presented.

The misfocus MTFs of arrays of ideal erect lenses and of GRIN lenses were analyzed with ray-tracing to verify the DOF extension. The depth-invariance of the proposed system was larger than that of a conventional, non-computational imaging system. The range of DOF extension was almost one-fourth of that of the object scanning method in Ref. [6]; however, our method also extends the FOV simultaneously in a single shot.

Acknowledgments

The authors thank Prof. Yasuhiro Awatsuji at Kyoto Institute of Technology for his technical support in this project.

References and links

1. J. W. Goodman, Introduction to Fourier Optics (McGraw-Hill, 1996).
2. J. E. R. Dowski and W. T. Cathey, “Extended depth of field through wave-front coding,” Appl. Opt. 34, 1859–1866 (1995).
3. Y. Takahashi and S. Komatsu, “Optimized free-form phase mask for extension of depth of field in wavefront-coded imaging,” Opt. Lett. 33, 1515–1517 (2008).
4. P. Mouroulis, “Depth of field extension with spherical optics,” Opt. Express 16, 12995–13004 (2008).
5. O. Cossairt, C. Zhou, and S. K. Nayar, “Diffusion coding photography for extended depth of field,” ACM Trans. Graph. (also Proc. of ACM SIGGRAPH) (2010).
6. G. Häusler, “A method to increase the depth of focus by two step image processing,” Opt. Commun. 6, 38–42 (1972).
7. S. Kuthirummal, H. Nagahara, C. Zhou, and S. K. Nayar, “Flexible depth of field photography,” IEEE Trans. Pattern Anal. Mach. Intell. 33, 58–71 (2011).
8. D. J. Brady and N. Hagen, “Multiscale lens design,” Opt. Express 17, 10659–10674 (2009).
9. D. L. Marks and D. J. Brady, “Gigagon: a monocentric lens design imaging 40 gigapixels,” in Imaging Systems (Optical Society of America, 2010), p. ITuC2.
10. O. Cossairt, D. Miau, and S. K. Nayar, “Gigapixel computational imaging,” in IEEE International Conference on Computational Photography (ICCP) (2011).
11. G. Druart, N. Guérineau, R. Haïdar, S. Thétas, J. Taboury, S. Rommeluère, J. Primot, and M. Fendler, “Demonstration of an infrared microcamera inspired by Xenos peckii vision,” Appl. Opt. 48, 3368–3374 (2009).
12. L. Li and A. Y. Yi, “Design and fabrication of a freeform microlens array for a compact large-field-of-view compound-eye camera,” Appl. Opt. 51, 1843–1852 (2012).
13. D. J. Brady, M. E. Gehm, R. A. Stack, D. L. Marks, D. S. Kittle, D. R. Golish, E. M. Vera, and S. D. Feller, “Multiscale gigapixel photography,” Nature 486, 386–389 (2012).
14. R. Horisaki, T. Nakamura, and J. Tanida, “Superposition imaging for three-dimensionally space-invariant point spread functions,” Appl. Phys. Express 4, 112501 (2011).
15. T. Nakamura, R. Horisaki, and J. Tanida, “Experimental verification of computational superposition imaging for compensating defocus and off-axis aberrated images,” in Computational Optical Sensing and Imaging (Optical Society of America, 2012), p. CM2B.4.
16. S. Hiura, A. Mohan, and R. Raskar, “Krill-eye: superposition compound eye for wide-angle imaging via GRIN lenses,” IPSJ Trans. Comput. Vis. Appl. 2, 186–199 (2010).
17. D. E. Nilsson, “A new type of imaging optics in compound eyes,” Nature 332, 76–78 (1988).
18. E. J. Warrant and P. D. McIntyre, “Limitations to resolution in superposition eyes,” J. Comp. Physiol. A 167, 785–803 (1990).
19. M. F. Land, F. A. Burton, and V. B. Meyer-Rochow, “The optical geometry of euphausiid eyes,” J. Comp. Physiol. A 130, 49–62 (1979).
20. S. Laughlin and S. McGinness, “The structures of dorsal and ventral regions of a dragonfly retina,” Cell Tissue Res. 188, 427–447 (1978).
21. J. W. Duparré and F. C. Wippermann, “Micro-optical artificial compound eyes,” Bioinspir. Biomim. 1, R1 (2006).
22. K. Stollberg, A. Brückner, J. Duparré, P. Dannberg, A. Bräuer, and A. Tünnermann, “The Gabor superlens as an alternative wafer-level camera approach inspired by superposition compound eyes of nocturnal insects,” Opt. Express 17, 15747–15759 (2009).
23. H. R. Fallah and A. Karimzadeh, “MTF of compound eye,” Opt. Express 18, 12304–12310 (2010).
24. M. F. Land and D.-E. Nilsson, Animal Eyes (Oxford University Press, 2002).
25. J. Tanida, T. Kumagai, K. Yamada, S. Miyatake, K. Ishida, T. Morimoto, N. Kondou, D. Miyazaki, and Y. Ichioka, “Thin observation module by bound optics (TOMBO): concept and experimental verification,” Appl. Opt. 40, 1806–1813 (2001).
26. J. Duparré, P. Dannberg, P. Schreiber, A. Bräuer, and A. Tünnermann, “Thin compound-eye camera,” Appl. Opt. 44, 2949–2956 (2005).
27. A. Brückner, J. Duparré, R. Leitel, P. Dannberg, A. Bräuer, and A. Tünnermann, “Thin wafer-level camera lenses inspired by insect compound eyes,” Opt. Express 18, 24379–24394 (2010).
28. R. Horisaki, S. Irie, Y. Ogura, and J. Tanida, “Three-dimensional information acquisition using a compound imaging system,” Opt. Rev. 14, 347–350 (2007).
29. R. Horisaki, K. Choi, J. Hahn, J. Tanida, and D. J. Brady, “Generalized sampling using a compound-eye imaging system for multi-dimensional object acquisition,” Opt. Express 18, 19367–19378 (2010).
30. R. Dinyari, S.-B. Rim, K. Huang, P. B. Catrysse, and P. Peumans, “Curving monolithic silicon for nonplanar focal plane array applications,” Appl. Phys. Lett. 92, 091114 (2008).
31. D. Dumas, M. Fendler, F. Berger, B. Cloix, C. Pornin, N. Baier, G. Druart, J. Primot, and E. le Coarer, “Infrared camera based on a curved retina,” Opt. Lett. 37, 653–655 (2012).
32. “Zemax,” http://www.zemax.com/.
33. K.-H. Jeong, J. Kim, and L. P. Lee, “Biologically inspired artificial compound eyes,” Science 312, 557–561 (2006).
34. D. Keum, H. Jung, and K.-H. Jeong, “Planar emulation of natural compound eyes,” Small 8, 2169–2173 (2012).
35. S. Maekawa, K. Nitta, and O. Matoba, “Transmissive optical imaging device with micromirror array,” in Proceedings of the SPIE (2006), p. 63920E.

OCIS Codes
(220.1000) Optical design and fabrication : Aberration compensation
(110.1758) Imaging systems : Computational imaging

ToC Category:
Imaging Systems

History
Original Manuscript: September 4, 2012
Revised Manuscript: November 2, 2012
Manuscript Accepted: November 14, 2012
Published: November 27, 2012


Sort:  Author  |  Year  |  Journal  |  Reset  

References

  1. J. W. Goodman, Introduction to Fourier Optics (McGraw-Hill, 1996).
  2. E. R. Dowski, Jr. and W. T. Cathey, “Extended depth of field through wave-front coding,” Appl. Opt. 34, 1859–1866 (1995).
  3. Y. Takahashi and S. Komatsu, “Optimized free-form phase mask for extension of depth of field in wavefront-coded imaging,” Opt. Lett. 33, 1515–1517 (2008).
  4. P. Mouroulis, “Depth of field extension with spherical optics,” Opt. Express 16, 12995–13004 (2008).
  5. O. Cossairt, C. Zhou, and S. K. Nayar, “Diffusion coding photography for extended depth of field,” ACM Trans. Graph. (also Proc. ACM SIGGRAPH) (2010).
  6. G. Häusler, “A method to increase the depth of focus by two step image processing,” Opt. Commun. 6, 38–42 (1972).
  7. S. Kuthirummal, H. Nagahara, C. Zhou, and S. K. Nayar, “Flexible depth of field photography,” IEEE Trans. Pattern Anal. Mach. Intell. 33, 58–71 (2011).
  8. D. J. Brady and N. Hagen, “Multiscale lens design,” Opt. Express 17, 10659–10674 (2009).
  9. D. L. Marks and D. J. Brady, “Gigagon: a monocentric lens design imaging 40 gigapixels,” in Imaging Systems (Optical Society of America, 2010), p. ITuC2.
  10. O. Cossairt, D. Miau, and S. K. Nayar, “Gigapixel computational imaging,” in IEEE International Conference on Computational Photography (ICCP) (2011).
  11. G. Druart, N. Guérineau, R. Haïdar, S. Thétas, J. Taboury, S. Rommeluère, J. Primot, and M. Fendler, “Demonstration of an infrared microcamera inspired by Xenos peckii vision,” Appl. Opt. 48, 3368–3374 (2009).
  12. L. Li and A. Y. Yi, “Design and fabrication of a freeform microlens array for a compact large-field-of-view compound-eye camera,” Appl. Opt. 51, 1843–1852 (2012).
  13. D. J. Brady, M. E. Gehm, R. A. Stack, D. L. Marks, D. S. Kittle, D. R. Golish, E. M. Vera, and S. D. Feller, “Multiscale gigapixel photography,” Nature 486, 386–389 (2012).
  14. R. Horisaki, T. Nakamura, and J. Tanida, “Superposition imaging for three-dimensionally space-invariant point spread functions,” Appl. Phys. Express 4, 112501 (2011).
  15. T. Nakamura, R. Horisaki, and J. Tanida, “Experimental verification of computational superposition imaging for compensating defocus and off-axis aberrated images,” in Computational Optical Sensing and Imaging (Optical Society of America, 2012), p. CM2B.4.
  16. S. Hiura, A. Mohan, and R. Raskar, “Krill-eye: superposition compound eye for wide-angle imaging via GRIN lenses,” IPSJ Trans. Comput. Vis. Appl. 2, 186–199 (2010).
  17. D.-E. Nilsson, “A new type of imaging optics in compound eyes,” Nature 332, 76–78 (1988).
  18. E. J. Warrant and P. D. McIntyre, “Limitations to resolution in superposition eyes,” J. Comp. Physiol. A 167, 785–803 (1990).
  19. M. F. Land, F. A. Burton, and V. B. Meyer-Rochow, “The optical geometry of euphausiid eyes,” J. Comp. Physiol. A 130, 49–62 (1979).
  20. S. Laughlin and S. McGinness, “The structures of dorsal and ventral regions of a dragonfly retina,” Cell Tissue Res. 188, 427–447 (1978).
  21. J. W. Duparré and F. C. Wippermann, “Micro-optical artificial compound eyes,” Bioinspir. Biomim. 1, R1 (2006).
  22. K. Stollberg, A. Brückner, J. Duparré, P. Dannberg, A. Bräuer, and A. Tünnermann, “The Gabor superlens as an alternative wafer-level camera approach inspired by superposition compound eyes of nocturnal insects,” Opt. Express 17, 15747–15759 (2009).
  23. H. R. Fallah and A. Karimzadeh, “MTF of compound eye,” Opt. Express 18, 12304–12310 (2010).
  24. M. F. Land and D.-E. Nilsson, Animal Eyes (Oxford University Press, 2002).
  25. J. Tanida, T. Kumagai, K. Yamada, S. Miyatake, K. Ishida, T. Morimoto, N. Kondou, D. Miyazaki, and Y. Ichioka, “Thin observation module by bound optics (TOMBO): concept and experimental verification,” Appl. Opt. 40, 1806–1813 (2001).
  26. J. Duparré, P. Dannberg, P. Schreiber, A. Bräuer, and A. Tünnermann, “Thin compound-eye camera,” Appl. Opt. 44, 2949–2956 (2005).
  27. A. Brückner, J. Duparré, R. Leitel, P. Dannberg, A. Bräuer, and A. Tünnermann, “Thin wafer-level camera lenses inspired by insect compound eyes,” Opt. Express 18, 24379–24394 (2010).
  28. R. Horisaki, S. Irie, Y. Ogura, and J. Tanida, “Three-dimensional information acquisition using a compound imaging system,” Opt. Rev. 14, 347–350 (2007).
  29. R. Horisaki, K. Choi, J. Hahn, J. Tanida, and D. J. Brady, “Generalized sampling using a compound-eye imaging system for multi-dimensional object acquisition,” Opt. Express 18, 19367–19378 (2010).
  30. R. Dinyari, S.-B. Rim, K. Huang, P. B. Catrysse, and P. Peumans, “Curving monolithic silicon for nonplanar focal plane array applications,” Appl. Phys. Lett. 92, 091114 (2008).
  31. D. Dumas, M. Fendler, F. Berger, B. Cloix, C. Pornin, N. Baier, G. Druart, J. Primot, and E. le Coarer, “Infrared camera based on a curved retina,” Opt. Lett. 37, 653–655 (2012).
  32. “Zemax,” http://www.zemax.com/.
  33. K.-H. Jeong, J. Kim, and L. P. Lee, “Biologically inspired artificial compound eyes,” Science 312, 557–561 (2006).
  34. D. Keum, H. Jung, and K.-H. Jeong, “Planar emulation of natural compound eyes,” Small 8, 2169–2173 (2012).
  35. S. Maekawa, K. Nitta, and O. Matoba, “Transmissive optical imaging device with micromirror array,” in Proceedings of the SPIE (2006), p. 63920E.
