## Low-cost miniature wide-angle imaging for self-motion estimation

Optics Express, Vol. 13, Issue 16, pp. 6061-6072 (2005)

http://dx.doi.org/10.1364/OPEX.13.006061


### Abstract

This paper examines the performance of a low-cost, miniature, wide field-of-view (FOV) visual sensor that combines advanced pinhole optics with recent CMOS imager technology. The pinhole camera is often disregarded because of its apparent simplicity, small aperture, and limited image finesse. However, its angular field can be dramatically improved using only a few off-the-shelf micro-optical elements. With a modern high-sensitivity silicon-based digital retina, we show that it can be a practical device for self-motion estimation in mobile applications, such as the stabilization of a robotic micro-flyer.

© 2005 Optical Society of America

## 1. Introduction


## 2. Motion estimation from spherical images


…*m* of rays captured from each scene point increases; it relates to the scene dependence of the motion estimation. The latter improves as the FOV gets wider. As noted in [3], *polydioptric spherical eyes* are very well suited for motion sensing and offer a number of advantages over conventional single-pinhole cameras with a narrow FOV. The reason is that a *polydioptric spherical eye* incorporates the typical properties of insects' apposition *compound eyes*, such as an omni-directional distribution of discrete light receptors in a spherical FOV and overlapping local receptive fields of the individual receptor units (cf. *m* > 1). Prototypes of polydioptric spherical eyes have been reported in the literature, based on one photoreceptor per view direction [4], electronic compound-eye image sensors [6], thin observation modules by bound optics (TOMBO) [8], and multi-camera systems for shape and motion estimation [11].

The *M×N* photoreceptors of a single-viewpoint spherical camera are arranged onto a unit sphere (see Fig. 1(a)), such that each pixel defines a viewing direction *d*_{i} (*i* = 1, 2, …, *M×N*). For a spherical camera moving rigidly with respect to its static 3D environment with translational and angular velocities *T* = (*T*_{x}, *T*_{y}, *T*_{z}) and *R* = (*R*_{x}, *R*_{y}, *R*_{z}), respectively, the equation that relates the spherical motion field *p*_{i} (motion parallax) to the ego-motion parameters is given by [12].
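The display equation referenced here (Eq. (1)) did not survive extraction. A reconstruction from the standard spherical motion-field model of [12], consistent with the surrounding text (the sign convention is an assumption):

```latex
\mathbf{p}_i \;=\; -\frac{1}{D_i}\Big(\mathbf{T}-(\mathbf{T}\cdot\mathbf{d}_i)\,\mathbf{d}_i\Big)\;-\;\mathbf{R}\times\mathbf{d}_i \qquad (1)
```

The first term is the translational flow, scaled by the inverse range 1/*D*_{i}; the second is the rotational flow, which is independent of scene structure.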

Here *D*_{i} is the distance between the unit sphere and the object seen by the camera in the direction *d*_{i}. Although *p*_{i} is a 3D vector (tangential to the sensor surface), it is constrained to be orthogonal to the unit vector *d*_{i} and can be described as a 2D tangent vector in a local coordinate system after projective transformation (i.e., the tangential optical flow field *p*_{i} = [∂*θ*/∂*t*; ∂*ϕ*/∂*t*]^{T}_{di} = *x*_{i}·*u* + *y*_{i}·*v*).

The motion field *p*_{i} is the sum of one rotational flow and one translational flow. The translational component is inversely proportional to the range of a scene point *D*_{i}; therefore only the direction of translation *t* = *T*/|*T*| (also known as the *focus of expansion*) can be estimated. By replacing the translational velocity vector in Eq. (1) by *t* and eliminating the range *D*_{i} (since *t*×*d*_{i} is perpendicular to both *t* and *d*_{i}), one obtains the *epipolar constraint*, Eq. (2), as follows.
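The display equation for Eq. (2) was also lost. Taking the inner product of the motion field with *t*×*d*_{i} eliminates the unknown range *D*_{i} (that vector is orthogonal to both *t* and *d*_{i}), giving the epipolar constraint in the form (a reconstruction, not verbatim):

```latex
(\mathbf{t}\times\mathbf{d}_i)\cdot\big(\mathbf{p}_i + \mathbf{R}\times\mathbf{d}_i\big)\;=\;0 \qquad (2)
```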


### 2.1 Influence of the field-of-view


…*the line* and *the orthogonality* constraints. These constraints, described in [16], are presented below.

If the intensity *I*_{i}(*t*) of the image point in the direction *d*_{i}(*θ*, *ϕ*) remains the same during *dt*, we obtain the *image brightness constancy constraint*.
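The display equations (Eqs. (3) and (4)) were lost in extraction. From d*I*_{i}/d*t* = 0, the constraint and its expansion with the spherical motion field read (a reconstruction consistent with the discussion below):

```latex
\frac{\partial I_i}{\partial t} + \frac{\partial I_i}{\partial \mathbf{d}_i}\cdot\mathbf{p}_i = 0 \qquad (3)
\qquad\Longrightarrow\qquad
\frac{\partial I_i}{\partial t} = \frac{\partial I_i}{\partial \mathbf{d}_i}\cdot\Big(\tfrac{1}{D_i}\big(\mathbf{t}-(\mathbf{t}\cdot\mathbf{d}_i)\mathbf{d}_i\big) + \mathbf{R}\times\mathbf{d}_i\Big) \qquad (4)
```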

Since the image gradient *∂I*_{i}/*∂d*_{i} lies on the plane (*u*, *v*) tangent to the spherical sensor, for a narrow FOV (i.e., little variation of *d*_{i}) the component (*t*·*d*_{i})*d*_{i} does not contribute to the first term (*∂I*_{i}/*∂d*_{i})·(*t*/*D*_{i}) in Eq. (4). Therefore the component of the direction of translation *t* parallel to *d*_{i} cannot be accurately recovered. Secondly, we see that the set of vectors {*∂I*_{i}/*∂d*_{i}, *d*_{i}×*∂I*_{i}/*∂d*_{i}} forms an orthogonal basis. Therefore, for a narrow FOV, a rotational error *R*_{e} = (*R*_{xe}, *R*_{ye}, *R*_{ze}) in the estimation of the rotation vector *R* can compensate a translational error *t*_{e} = (*θ*_{e}, *ϕ*_{e}, 0) in the estimation of *t* without violating the constraint in Eq. (4). This is true as long as the projections of *R*_{e} and *t*_{e} onto the tangent plane (*u*, *v*) at *d*_{i} are orthogonal.

…*t* are available in highly different directions (e.g., *θ* ∈ [-π/2; π/2] and *ϕ* ∈ [0; π] for a hemisphere of viewing directions) and (ii) the orthogonality constraint on the rotational and translational errors cannot be satisfied in all directions. Thus, for a very large FOV, distinct motion fields can be expected even in the presence of noise.

### 2.2 Robustness to noise

…*θ* and the image coordinates (*x*, *y*) is invariant (i.e., *θ* = arctan(*y*/*x*)). These assumptions allow mapping the optic flow vectors *p*_{i}′ computed on the image plane to a sphere without introducing artifacts, using the following transformation [10]. The terms *∂ϕ*/*∂x* and *∂ϕ*/*∂y* in the *Jacobian* matrix depend on the projection function *r* = *f*(*ϕ*), *f*: ℜ_{+} → ℜ_{+}, given by the geometry of the imaging device. As a matter of fact, the primary source of errors in the computation of egomotion comes from the accuracy of optic flow techniques and their sensitivity to noise.
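The Jacobian mapping above can be sketched numerically. This is a minimal illustration, not the paper's exact transformation: it assumes an equidistant projection *r* = *k*·*ϕ* with *θ* = arctan(*y*/*x*); the constant *k* and the sample values are our assumptions.

```python
import numpy as np

def image_flow_to_sphere(x, y, u, v, k=1.8e-3):
    """Map an image-plane flow vector (u, v) [m/s] at position (x, y) [m]
    to angular rates (dtheta/dt, dphi/dt) [rad/s] on the view sphere,
    assuming the equidistant projection r = k * phi (our assumption)."""
    r = np.hypot(x, y)
    # Jacobian of (theta, phi) = (arctan2(y, x), r / k) w.r.t. (x, y)
    J = np.array([[-y / r**2,   x / r**2],     # d(theta)/dx, d(theta)/dy
                  [x / (k * r), y / (k * r)]])  # d(phi)/dx,   d(phi)/dy
    return J @ np.array([u, v])

# A purely tangential image flow at a point on the +x axis maps to a pure
# azimuthal rate dtheta/dt, with no change in elevation phi.
dtheta, dphi = image_flow_to_sphere(1e-3, 0.0, 0.0, 1e-3)
```

A radial flow at the same point maps instead to a pure elevation rate, which is where the projection function *f*(*ϕ*) enters through ∂*ϕ*/∂*x* and ∂*ϕ*/∂*y*.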

…Srinivasan *et al*. [18]. The Srinivasan *et al*. approach has no recourse to iterative search or multi-resolution techniques. Eq. (3) is basically linearised by modeling the motion fields *x*_{i} and *y*_{i} as a weighted sum of basis functions (e.g., cosine windows). The least-squares solution of the resulting linear system (optimal in terms of the bias and covariance of the estimates) gives an accurate estimation of the weights that constitute the model parameters. At a noise level of 20 dB, the mean flow-vector error angle (over all image points, measured on real data) typically varies between 2.62° and 4.28°.
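The idea can be sketched as follows. This is not the authors' exact formulation [18], only an illustration of the principle: the flow components are modeled as weighted sums of smooth basis functions, and the weights are recovered in one shot by linear least squares from the brightness-constancy equation, with no iterative search. The grid size, basis choice, and synthetic gradients are our assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
H = W = 16
yy, xx = np.mgrid[0:H, 0:W] / (H - 1.0)

def basis(x, y):
    # Two basis functions per flow component: a constant and one cosine
    # window (an illustrative choice; the paper mentions cosine windows).
    return np.stack([np.ones_like(x), np.cos(np.pi * x) * np.cos(np.pi * y)], axis=-1)

B = basis(xx, yy).reshape(-1, 2)            # (H*W, 2) basis matrix
Ix = rng.normal(size=H * W)                 # synthetic spatial gradients
Iy = rng.normal(size=H * W)
w_true = np.array([0.5, -0.2, 0.1, 0.3])    # ground-truth weights (u | v)
# Temporal derivative from exact brightness constancy Ix*u + Iy*v + It = 0:
It = -(Ix * (B @ w_true[:2]) + Iy * (B @ w_true[2:]))

# One linear system over all pixels: [Ix*B | Iy*B] @ w = -It
A = np.hstack([Ix[:, None] * B, Iy[:, None] * B])
w_est, *_ = np.linalg.lstsq(A, -It, rcond=None)
```

With noise-free gradients the weights are recovered exactly; in practice the least-squares fit averages the noise over many pixels per basis function, which is the source of the method's robustness.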

## 3. Low-cost hemispherical pinhole camera


### 3.1 Photon shot noise limit in CMOS imager


…*Poisson* statistics of the incoming light (i.e., the random process of photon detection), where *n̄* is the average arrival rate of photons reaching a pixel and *η* is the external quantum efficiency. *η* represents the fraction of the incident photon flux that contributes to the photocurrent in the pixel as a function of wavelength; it comprises both the internal quantum efficiency (the silicon substrate's capability to convert light energy into electrical energy) and the optical efficiency (light sensitivity depends on the geometric arrangement of the photo-detector within an image-sensor pixel, and on the pixel location with respect to the imaging-optics axis). The signal-to-quantization-noise ratio gives another upper limit, of approximately 6×N decibels for an N-bit imaging system (cf. SQNR = 20·log(2^{N}/0.5) ≈ 6.02 N dB).
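The shot-noise equation itself was lost in extraction. A minimal sketch, assuming the standard shot-noise form (signal = *η*·*n* photo-electrons, Poisson noise = √(*η*·*n*), hence a power SNR of *η*·*n*) together with the quantization bound quoted above:

```python
import math

def shot_noise_snr_db(n_photons, eta=0.37):
    """Shot-noise-limited SNR in dB: signal eta*n electrons, noise
    sqrt(eta*n), so the power SNR is eta*n (standard form, an assumption)."""
    return 10.0 * math.log10(eta * n_photons)

def sqnr_db(n_bits):
    """Quantization bound SQNR = 20*log10(2**N / 0.5) dB, the expression
    used in the text (roughly 6 dB per bit)."""
    return 20.0 * math.log10(2 ** n_bits / 0.5)
```

With the ~2130 photons per pixel quoted later for room light at 20 ms exposure and *η* = 37%, this gives a shot-noise limit of about 29 dB, consistent with the "SNR greater than 24 dB" claim below.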

The number of photons *n* that reach a single pixel within the exposure time Δ*t* determines the minimal required illuminance *I* onto the image plane. This amount of light *I* incident on the photocells of the sensor is also affected by the light-gathering capability of the imaging lens. For instance, assuming that the lens transmittance is perfect, the light throughput is inversely proportional to the square of the relative aperture *F*_{/#} [24].

### 3.2 Pinhole optics design and performance

…where *λ* is the wavelength, *d* the object distance, *f* the focal length (or image distance), and *β* the *Petzval* constant (*β* = 2) [25]. Therefore, we approximate the relative aperture (the ratio of the focal length *f* to the pinhole diameter Ø, for lensless imaging) by
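The displayed expressions were lost here. From Young's pinhole analysis [25] with the symbols just defined (a reconstruction), the optimum pinhole diameter and the resulting relative aperture are

```latex
\varnothing=\sqrt{\beta\lambda\,\frac{f\,d}{f+d}}\;\xrightarrow{\;d\gg f\;}\;\sqrt{2\lambda f},
\qquad
F_{/\#}=\frac{f}{\varnothing}\approx\sqrt{\frac{f}{2\lambda}}
```

For *f* = 1.8 mm and *λ* ≈ 550 nm this gives Ø ≈ 44.5 μm and *F*_{/#} ≈ 40.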

…where *θ*_{max} is the maximum angle of incidence of light rays (i.e., half the FOV angle) and *r* defines the size of the projection onto the sensor. Given this optimum configuration in the sense of diffraction-limit minimization, and using Eq. (6), the required level of scene illuminance *I*_{o} can be calculated through the equation *I*_{0} = *I* = (*α n h c*)/(*z*^{2} Δ*t* *λ*), where *α* is a conversion factor (*α* ≈ 6.68×10^{14} for conversion from J/μm^{2}/s to lux), *h* is Planck's constant, *c* the speed of light, *z* the pixel size (in microns), and Δ*t* the integration time (or exposure time) of the CMOS sensor. The number of photons *n* is itself a function of the SNR, Δ*t*, and *η*. The relative aperture depends on *θ*_{max}, *r*, and *λ*.

…*I*_{o} as a function of the exposure time Δ*t* of the CMOS sensor. This estimation assumes that all photons in the visible range have roughly the same energy (e.g., *λ* ≈ 550 nm), a typical pixel size of *z*^{2} = 7.5×7.5 μm^{2} (i.e., a 1/3″ sensor with 640×480-pixel resolution), and an average external quantum efficiency *η* = 37% [27], neglecting any variation of optical efficiency with the angle *θ* of ray incidence, such as that observed in [28]. For a given angle *θ*, the minimal illuminance *I*_{o} of objects to be observed in a scene decreases monotonically as the angle *θ*_{max} increases, until a minimum is reached at *θ*_{max} = 45 degrees, where *f* equals *r*. Nevertheless, the maximum acceptable angular field may depend on the use of integrated micro-lens arrays and on the sensor's ability to tolerate or correct a large variation in exposure between the centre and the edge of the image plane. Figure 3 also shows that it is possible to capture high-contrast images in an indoor environment (cf. the typical level of illuminance in a living room is within the range of 50 to 200 lux) with an SNR greater than 24 dB by increasing the exposure time (e.g., Δ*t* constrained to 20 ms to avoid motion blur in the image).

…*r*. A simple technique to overcome this limitation consists of an inverted glass hemisphere attached underneath the pinhole [28].

### 3.3 Limits to motion estimation

…a diffraction blur spot of size *δx* ≈ *λ*·(*f*/Ø), which creates a blur of angular width *δϕ*_{diff} ≈ ½ FOV·(*δx*/*r*), and (ii) the angular resolution *δϕ*_{res} ≈ ½ FOV·(*z*/*r*), where *z* is the pixel pitch. The three other features are environmental: (iii) the amount of light available to the receptors (discussed previously), (iv) the contrast of the stimulus, and (v) the micro-UAV's motion. In our design, an optimal pixel size is not guaranteed, since the two angular limits can be slightly different. An approximation of the combined effect of *δϕ*_{diff} and *δϕ*_{res} (assumed Gaussian in profile: *Point Spread Function*, PSF) is obtained by calculating the acceptance angle *δρ* (with *δρ*^{2} = *δϕ*_{diff}^{2} + *δϕ*_{res}^{2}). Consider an edge stimulus whose irradiance steps from *I* = *I*_{0}×(1+*C*) to *I* = *I*_{0}×(1−*C*), where *I*_{0} is the mean irradiance and *C* the contrast parameter. If we express the photocell gain as the transduction from contrast to voltage, then a single photodetector element (with its optical axis aligned to both PSF and edge) that moves by an angle *δθ* will see a change in contrast of roughly [31]

…*δθ* at which the contrast-signal to contrast-noise ratio (SN_{c}R) for *M* ≈ π·(*r*/*δx*)^{2} transported pixels crosses ~24 dB (cf. results in Fig. 3), using the approximation [32], where *n̄* is the average arrival rate of photons to be transduced per photocell (similar to the *quantum bump rate*). This rough calculation gives *δθ*_{min} ≈ 0.01° for conditions typical of a realistic experiment: *λ* ≈ 550 nm, *f* = *r* = 1.8 mm, *z* = 7.5 μm, *C* = 0.1, *M* ≈ 10^{5}, and *n̄*×Δ*t* ≈ 2130 photons (room-light conditions and Δ*t* = 20 ms). This estimate leads to a detectable angular velocity as low as *δθ*/*δt* ≈ 0.5 °/s. However, due to the *long* integration time Δ*t* of the photoreceptors, the proposed eye is expected to be sensitive to temporal modulations only up to 50 Hz. As a consequence, motion blur will occur in the digital image at angular velocities greater than *δϕ*_{res}/Δ*t* ≈ 19 °/s. This upper limit shows that the only way for a micro-UAV to achieve higher-speed maneuvers using our hemispherical eye for self-motion estimation is to adapt the image resolution according to the angular velocity.
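The quoted angular limits can be checked numerically from the formulas above. The parameter values (FOV = 180°, *f* = *r* = 1.8 mm, *z* = 7.5 μm, Δ*t* = 20 ms, *λ* ≈ 550 nm) come from the text; the distant-object optimum pinhole Ø = √(2*λf*) is our assumption from the earlier pinhole analysis.

```python
import math

FOV = 180.0                         # full field of view (deg)
f = r = 1.8e-3                      # focal length = image radius (m)
z = 7.5e-6                          # pixel pitch (m)
lam = 550e-9                        # wavelength (m)
dt = 20e-3                          # exposure time (s)

diameter = math.sqrt(2 * lam * f)   # optimum pinhole diameter (Young, d >> f)
dx = lam * f / diameter             # diffraction blur spot on the sensor
dphi_diff = 0.5 * FOV * dx / r      # blur angular width, ~1.1 deg
dphi_res = 0.5 * FOV * z / r        # pixel angular resolution, 0.375 deg
blur_limit = dphi_res / dt          # 18.75 deg/s, i.e. the ~19 deg/s above
```

The two angular limits indeed differ (≈1.1° of diffraction blur versus 0.375° of pixel resolution), which is why an optimal pixel size is not guaranteed in this design.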

### 3.4 Experiments

…*OmniVision* [30], which features a 4.69×3.54 mm^{2} image area, a 510×492-pixel resolution, a 9.2×7.2 μm^{2} pixel size, and a 16 ms maximal exposure time. The specifications of the optical components are listed in Table 1 below. The precision optical aperture disc is placed approximately 1.8 mm away from the image plane.


…translation *T*_{y} along the *Y*-axis and rotation *R*_{z} about the *Z*-axis. Indeed, the magnitude of the motion field computed for *R*_{z} decreases slowly as the elevation angle *ϕ* increases, whereas the one computed for *T*_{y} is more homogeneously distributed over the entire FOV.

## 4. Conclusion and future work

## Acknowledgments

## References and links

1. D.V. Wick, T. Martinez, S.R. Restaino, and B.R. Stone, "Foveated imaging demonstration," Opt. Express **10**, 60–65 (2002). [PubMed]
2. R. Volkel, M. Eisner, and K.J. Weible, "Miniaturized imaging system," Microelectron. Eng. **67–68**, 461–472 (2003). [CrossRef]
3. J. Neumann, C. Fermuller, and Y. Aloimonos, "Eyes from eyes: new cameras for structure from motion," in *IEEE Proceedings of the Third Workshop on Omnidirectional Vision* (Copenhagen, Denmark, 2002), 19–26. [CrossRef]
4. T. Netter and N. Franceschini, "A robotic aircraft that follows terrain using a neuromorphic eye," in *IEEE Proceedings of the Conference on Intelligent Robots and Systems* (Lausanne, Switzerland, 2002), 129–134. [CrossRef]
5. K. Hoshino, F. Mura, H. Morii, K. Suematsu, and I. Shimoyama, "A small-sized panoramic scanning visual sensor inspired by the fly's compound eye," in *IEEE Proceedings of the Conference on Robotics and Automation* (ICRA, Leuven, Belgium, 1998), 1641–1646.
6. R. Hornsey, P. Thomas, W. Wong, S. Pepic, K. Yip, and R. Krishnasamy, "Electronic compound eye image sensor: construction and calibration," in *Sensors and Camera Systems for Scientific, Industrial, and Digital Photography Applications V*, M.M. Blouke, N. Sampat, and R. Motta, eds., Proc. SPIE **5301**, 13–24 (2004). [CrossRef]
7. J. Neumann, C. Fermuller, Y. Aloimonos, and V. Brajovic, "Compound eye sensor for 3D ego motion estimation," in *IEEE Proceedings of the Conference on Intelligent Robots and Systems* (Sendai, Japan, 2004).
8. J. Tanida, T. Kumagai, K. Yamada, S. Miyatake, K. Ishida, T. Morimoto, N. Kondou, D. Miyazaki, and Y. Ichioka, "Thin observation module by bound optics (TOMBO): concept and experimental verification," Appl. Opt. **40**, 1806–1813 (2001). [CrossRef]
9. J. Kim, K.H. Jeong, and L.P. Lee, "Artificial ommatidia by self-aligned microlenses and waveguides," Opt. Lett. **30**, 5–7 (2005).
10. J. Gluckman and S.K. Nayar, "Egomotion and omnidirectional cameras," in *IEEE Proceedings of the Conference on Computer Vision and Pattern Recognition* (Bombay, India, 1998), 999–1005.
11. P. Baker, R. Pless, C. Fermuller, and Y. Aloimonos, "New eyes for shape and motion estimation," in *Proceedings of the First International Workshop on Biologically Motivated Computer Vision*, Lecture Notes in Computer Science **1811** (Springer-Verlag, 2000), 118–128. [CrossRef]
12. J.J. Koenderink and A.J. van Doorn, "Facts on optic flow," Biol. Cybern. **56**, 247–254 (1987). [CrossRef]
13. T. Tian, C. Tomasi, and D. Heeger, "Comparison of approaches to egomotion estimation," in *IEEE Proceedings of the Conference on Computer Vision and Pattern Recognition* (San Francisco, Calif., 1996), 315–320.
14. M. Franz, J. Chahl, and H. Krapp, "Insect-inspired estimation of egomotion," Neural Comput. **16**, 2245–2260 (2004). [CrossRef]
15. G. Adiv, "Inherent ambiguities in recovering 3D motion and structure from a noisy field," IEEE Trans. Pattern Anal. Mach. Intell. **11**, 477–489 (1989). [CrossRef]
16. C. Fermuller and Y. Aloimonos, "Observability of 3D motion," Int. J. Comput. Vision **37**, 46–63 (2000).
17. J. Neumann, "Computer vision in the space of light rays: plenoptic video geometry and polydioptric camera design," Ph.D. dissertation, Department of Computer Science, University of Maryland (2004).
18. S. Srinivasan and R. Chellappa, "Noise-resilient estimation of optical flow by use of overlapped basis functions," J. Opt. Soc. Am. A **16**, 493–507 (1999). [CrossRef]
19. A. Bruhn, J. Weickert, and C. Schnorr, "Combining the advantages of local and global optic flow methods," in *DAGM Proceedings of the Symposium on Pattern Recognition* (2002), 457–462.
20. C. Fermuller, Y. Aloimonos, P. Baker, R. Pless, J. Neumann, and B. Stuart, "Multi-camera networks: eyes from eyes," in *IEEE Proceedings of the Workshop on Omnidirectional Vision* (2000), 11–18.
21. T. Martinez, D.V. Wick, and S.R. Restaino, "Foveated, wide field-of-view imaging system using a liquid crystal spatial light modulator," Opt. Express **8**, 555–560 (2001). [CrossRef] [PubMed]
22. J. Beckstead and S. Nordhauser, "360 degree/forward view integral imaging system," US Patent 6,028,719 (InterScience Inc., 22 February 2000).
23. R. Constantini and S. Susstrunk, "Virtual sensor design," in *Sensors and Camera Systems for Scientific, Industrial, and Digital Photography Applications V*, Proc. SPIE **5301**, 408–419 (2004). [CrossRef]
24. T.H. Nilsson, "Incident photometry: specifying stimuli for vision and light detectors," Appl. Opt. **22**, 3457–3464 (1983). [CrossRef] [PubMed]
25. M. Young, "Pinhole optics," Appl. Opt. **10**, 2763–2767 (1971). [CrossRef] [PubMed]
26. K.D. Mielenz, "On the diffraction limit for lensless imaging," J. Res. Natl. Inst. Stand. Technol. **104** (1999).
27. B. Fowler, A.E. Gamal, D. Yang, and H. Tian, "A method for estimating quantum efficiency for CMOS image sensors," in *Solid State Sensor Arrays: Development and Applications*, Proc. SPIE **3301**, 178–185 (1998). [CrossRef]
28. P.B. Catrysse and B.A. Wandell, "Optical efficiency of image sensor pixels," J. Opt. Soc. Am. A **19** (2002). [CrossRef]
29. J.M. Franke, "Field-widened pinhole camera," Appl. Opt. **18** (1979). [CrossRef] [PubMed]
30. http://www.ovt.com
31. R. Steveninck and W. Bialek, "Timing and counting precision in the blowfly visual system," in *Methods in Neural Networks IV*, J. van Hemmen, J.D. Cowan, and E. Domany, eds. (Springer-Verlag, Heidelberg, 2001), 313–371.
32. W. Bialek, "Thinking about the brain," in *Les Houches Lectures on the Physics of Biomolecules and Cells*, H. Flyvbjerg, F. Jülicher, P. Ormos, and F. David, eds. (Springer-Verlag, Les Ulis, France, 2002), 485–577.

**OCIS Codes**

(100.2000) Image processing : Digital image processing

(110.2990) Imaging systems : Image formation theory

(130.6010) Integrated optics : Sensors

(150.0150) Machine vision : Machine vision

**ToC Category:**

Research Papers

**History**

Original Manuscript: June 17, 2005

Revised Manuscript: July 23, 2005

Published: August 8, 2005

**Citation**

Christel-Loic Tisse, "Low-cost miniature wide-angle imaging for self-motion estimation," Opt. Express **13**, 6061-6072 (2005)

http://www.opticsinfobase.org/oe/abstract.cfm?URI=oe-13-16-6061

