Artificial compound eye applying hyperacuity

Andreas Brückner, Jacques Duparré, Andreas Bräuer, and Andreas Tünnermann


Optics Express, Vol. 14, Issue 25, pp. 12076-12084 (2006)
http://dx.doi.org/10.1364/OE.14.012076



Abstract

Inspired by the natural phenomenon of hyperacuity, redundant sampling combined with knowledge of the impulse response of the imaging system is used to extract highly accurate information from a low-resolution artificial apposition compound eye. On this basis, the implementation of precise position detection for simple objects such as point sources and edges is described.

© 2006 Optical Society of America

1. Introduction

Inspired by the principles of insect vision, artificial compound eye cameras are pushing the limits of miniaturization of imaging systems [1, 2]. For example, the artificial apposition compound eye consists of a microlens array on a planar glass substrate with a thickness of less than 500 µm. In combination with an optoelectronic sensor array, this forms an ultra-thin imaging camera with the potential for a large field of view (FOV), low volume and low weight. However, due to the reduction of system size, the diameter of each lenslet is small and the number of image details that can be transferred by the system decreases [3]. As a result, the image resolution is limited to about ten thousand pixels.

Confronted with the same problem, nature developed a strategy to obtain information with sub-pixel accuracy. A number of insects are able to detect motion within a fraction of their photoreceptor diameter [4]. This so-called hyperacuity results from a dense sampling of the object space combined with image segmentation, information pooling and parallel signal processing [5–7]. Although hyperacuity enables insects to perceive motion with an amplitude smaller than the resolution limit of their compound eyes, it does not enable them to resolve adjacent image details beyond this limit [8].

In machine and robot vision, a low-resolution optical sensor for position detection is advantageous in terms of speed, computational load and energy consumption [9]. Furthermore, there often exists some a priori knowledge about the object to be located in the image. By adopting some features of natural hyperacuity, position detection with sub-pixel accuracy can be achieved with low-resolution imaging systems. Previously demonstrated sensors of this kind use either a scanning regime or overlapping FOVs of three adjacent optical channels to increase the sampling density in object space [10–13]. Due to the scanning drive, a robust mechanical construction and high system complexity pose the main problems for the miniaturization of these sensors. Another disadvantage is that hyperacuity in scanning sensors is limited to one dimension. Sensors with overlapping FOVs suffer from alignment problems and rapidly increasing volume when individual modules are combined to achieve a two-dimensional FOV.

This article demonstrates that several of these drawbacks can be overcome by using artificial apposition compound eyes in combination with hyperacuity methods. There is no need for scanning because many thousands of optical channels image in parallel. The increase in accuracy is achieved across the whole two-dimensional FOV of the camera by letting the FOVs of adjacent optical channels overlap. Furthermore, since artificial apposition compound eyes are imaging systems, several objects can be tracked at once provided that their images are separated. At the same time, object recognition is also feasible.

In section 2 we present a linear model of the imaging process in artificial apposition compound eyes, which is later used to derive an analytical and unique relationship between measured powers and the position of objects (section 3). Finally, the experimental verification of locating objects with increased accuracy is presented in section 4.

2. Imaging model for artificial apposition compound eyes

Artificial apposition compound eyes consist of a microlens array (lens diameter D, focal length f, pitch pL) replicated on top of a thin glass substrate (Fig. 1). In contrast to their natural counterparts, they are fabricated on a planar rather than a curved basis, owing to today's micro-electronics fabrication technology. An optoelectronic sensor array with a different pitch pK is placed in the focal plane of the microlenses to pick up the image. Furthermore, the size of the sensor pixels is narrowed down by a pinhole array on the substrate backside in order to increase resolution. The pitch difference Δp = pL − pK gives the individual optical channels different viewing directions. The short focal length of the lenslets leads to a nearly unlimited depth of field, meaning that for object distances larger than ten times the focal length (i.e. approx. 2 mm) the image remains sharp within the focal plane.

Fig. 1. Schematic section of an artificial apposition compound eye with a system length L. Δϕ is the angle between two adjacent optical axes (sampling angle) and Δφ the acceptance angle which is a measure for the smallest resolvable feature size.

As shown in Fig. 1, each channel contributes one image point by collecting light from a finite angle Δφ, given by the full width at half maximum (FWHM) of the Airy diffraction pattern convolved with the pinhole of diameter d. Projecting the result of this convolution into object space via the focal length f gives the angular sensitivity function (ASF in Fig. 2). The ASF describes the efficiency of the intensity transfer for an object point as a function of its angular distance φ from the optical axis of the channel.

Fig. 2. Origin of the angular sensitivity function (ASF) of one channel and its FWHM, the acceptance angle.

We now use a linear imaging model of one channel with the assumptions of incoherent illumination and space invariance. For an extended object the intensity distribution in the focal plane I(x, y) can then be written as the convolution of the intensity of the geometric image O with the impulse response R of the lens

$$I(x,y) = \iint R(x-\xi,\, y-\eta)\cdot O(\xi,\eta)\; d\xi\, d\eta. \tag{1}$$

R(x−ξ, y−η) is the response of the optical system at (x, y) in the image plane to an impulse at (ξ, η) in the object plane. Solving the convolution integral analytically results in an intricate term for the ASF. For this reason it is appropriate to use a Gaussian approximation that was originally derived for natural compound eyes [14]:

$$\mathrm{ASF}(\varphi) \propto \exp\!\left[-4\ln 2\cdot\left(\frac{\varphi}{\Delta\varphi}\right)^{2}\right]. \tag{2}$$

This approximation is valid as long as the pinhole diameter and the FWHM of the Airy pattern are nearly of the same size. The FWHM of the Gaussian ASF is then given by [15]:

$$\Delta\varphi = \sqrt{\left(\frac{\lambda}{D}\right)^{2} + \left(\frac{d}{f}\right)^{2}}. \tag{3}$$

The so-called acceptance angle Δφ has a geometrical contribution Δρ = d/f, which is the pinhole diameter projected into object space, and a second contribution Δδ = λ/D determined by diffraction at the aperture of the microlens. Equation (3) reveals an important trade-off for artificial apposition compound eyes: since Δφ approximates the smallest resolvable feature size, increasing the resolution for a given F-number requires decreasing the pinhole diameter d, whereupon the sensitivity drops with d².
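As a numerical illustration of Eqs. (2) and (3), the following minimal Python sketch computes the acceptance angle and evaluates the Gaussian ASF. All parameter values are assumptions chosen for illustration; only the pinhole diameter lies within the 2–4 µm range quoted in Table 1.

```python
import numpy as np

# Illustrative parameters; all values are assumptions except the pinhole
# diameter, which lies in the 2-4 um range quoted in Table 1.
lam = 0.55e-6   # wavelength in m (green light, assumed)
D   = 85e-6     # microlens diameter in m (assumed)
f   = 200e-6    # focal length in m (assumed; ten times f is approx. 2 mm)
d   = 3e-6      # pinhole diameter in m

# Eq. (3): acceptance angle as the quadrature sum of the geometrical
# contribution d/f and the diffraction contribution lambda/D.
delta_phi = np.hypot(d / f, lam / D)   # in radians
print(f"acceptance angle = {np.degrees(delta_phi):.2f} deg")

# Eq. (2): Gaussian approximation of the angular sensitivity function.
def asf(phi):
    """Relative sensitivity for an object point at angle phi off-axis."""
    return np.exp(-4.0 * np.log(2.0) * (phi / delta_phi) ** 2)

# Sanity check: the ASF drops to one half at phi = delta_phi / 2 (FWHM).
print(asf(0.0), asf(delta_phi / 2.0))   # -> 1.0, 0.5
```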

3. Implementation of high accuracy position detection

It has been pointed out above that the acceptance angle approximates the smallest resolvable angle between two image details, i.e. the angular cut-off frequency of the lens is νCO ≈ 1/Δφ. Following the sampling theorem, the sampling frequency νS = 1/Δϕ has to be at least twice the optical cut-off to exploit the resolution potential of the eye. In the case of infinitely narrow (i.e. δ-shaped) sampling points, no information is gained by using a higher sampling frequency. However, in artificial apposition compound eyes the object space is sampled by the ASFs of the different optical channels, which exhibit a finite angular width. In this case the overlap between sampling profiles enables the extraction of sub-pixel information when the profile of the ASF is known.
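As a brief numerical check, take the acceptance angle Δφ = 0.9° quoted in Fig. 6: the angular cut-off is then νCO ≈ 1/0.9° ≈ 1.1 periods per degree, so that

$$\nu_S = \frac{1}{\Delta\phi} \ge 2\,\nu_{CO} \approx \frac{2}{\Delta\varphi} \quad\Rightarrow\quad \Delta\phi \le \frac{\Delta\varphi}{2} = 0.45^{\circ}.$$

The sampling angle must therefore be at most half the acceptance angle, which is precisely the regime in which the ASFs of adjacent channels overlap appreciably.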

For example, a point source at an angular distance φP from the optical axis of one channel causes different intensities within the focal planes of adjacent channels (Fig. 3(a)). The individual intensities depend on (I) the distance to the optical axis, (II) the absolute irradiance of the source and (III) other unknown parameters, e.g. the transmission of the optical system. By modeling the point source as a δ-distribution, the intensity in the k'th channel is proportional to the value of its ASF at the location of the source

$$I_k(\varphi) = c\cdot \mathrm{ASF}_k(\varphi_P). \tag{4}$$

Fig. 3. (a): Efficiency of intensity transfer for a point source at φp in three adjacent channels with overlapping FOVs. (b): Position of a point source within the projected image plane. For a given intensity the position lies on a circle with radius rk around the optical axis (•) of the k’th channel.

At this point the ratio of the intensities is equal to the ratio of the measured powers α in two adjacent pinholes because the area of the pinholes is constant. We substitute Eq. (2) into Eq. (4) and account for the offset between adjacent optical axes (see Fig. 3(a)). In Eq. (5) this is done for the center channel (index 0) and its adjacent channel along the x axis (index 1) for demonstration:

$$\alpha_x = \frac{I_1(\varphi_1)}{I_0(\varphi_0)} = \exp\!\left[-\frac{4\ln 2}{\Delta\varphi^{2}}\cdot\left(\varphi_1^{2}-\varphi_0^{2}\right)\right]. \tag{5}$$

The angles φ₀ and φ₁ correspond to the angular position of the point source with respect to the optical axis of the center channel and of its neighbor, respectively. Within the projected image plane, the distance between the position of the point source and the center of one channel is given by

$$r_0 = \sqrt{x_0^{2} + y_0^{2}} \tag{6}$$

$$r_1 = \sqrt{x_1^{2} + y_1^{2}} = \sqrt{(x_0 - \Delta p)^{2} + y_0^{2}}, \tag{7}$$

for the two adjacent channels used here (see Fig. 3(b) for reference). Now we apply a small-angle approximation of the arctangent, since φ is small for each channel:

$$\varphi = \arctan\!\left(\frac{r}{f}\right) \approx \frac{r}{f}. \tag{8}$$

Together with Eqs. (6) and (7), this is substituted into Eq. (5), which leads to the simpler expression

$$\alpha_x \approx \exp\!\left\{-s\cdot\left[\Delta p^{2} - 2\,\Delta p\cdot x_0\right]\right\}, \tag{9}$$

wherein s is defined by

$$s = \frac{4\ln 2}{(\Delta\varphi\cdot f)^{2}}. \tag{10}$$

Solving Eq. (9) for the coordinate x₀ yields

$$x_0 = \frac{\ln(\alpha_x)}{2\, s\, \Delta p} + \frac{\Delta p}{2}. \tag{11}$$

The second ratio, taken in the perpendicular direction, is needed to find the other coordinate y₀ via an equation analogous to Eq. (11). The radial position r₀ within the projected image plane, and therefore the angular distance φP to the optical axis of one channel, are then calculated from these coordinates with the help of Eqs. (6) and (8).
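The complete point-source estimator of Eqs. (6) and (8)–(11) fits in a few lines. The following Python sketch is our illustration, not the authors' code; the parameter values in the self-test are assumed, and the pinhole powers are synthesized directly from the Gaussian ASF model:

```python
import numpy as np

def locate_point_source(P0, Px, Py, delta_phi, delta_p, f):
    """Recover the point-source position from the powers measured in three
    adjacent pinholes: P0 (center channel), Px (neighbor along x) and
    Py (neighbor along y). A sketch of Eqs. (6) and (8)-(11)."""
    s = 4.0 * np.log(2.0) / (delta_phi * f) ** 2                 # Eq. (10)
    x0 = np.log(Px / P0) / (2.0 * s * delta_p) + delta_p / 2.0   # Eq. (11)
    y0 = np.log(Py / P0) / (2.0 * s * delta_p) + delta_p / 2.0   # analog in y
    r0 = np.hypot(x0, y0)                                        # Eq. (6)
    return x0, y0, np.degrees(r0 / f)                            # Eq. (8): phi_P

# Self-test with powers synthesized from the Gaussian ASF of Eq. (2);
# all numerical values below are assumed for illustration only.
delta_phi, delta_p, f = np.radians(0.9), 2e-6, 200e-6
x_t, y_t = 0.7e-6, -0.4e-6                                       # "true" position
s = 4.0 * np.log(2.0) / (delta_phi * f) ** 2
P0 = np.exp(-s * (x_t**2 + y_t**2))
Px = np.exp(-s * ((x_t - delta_p)**2 + y_t**2))
Py = np.exp(-s * (x_t**2 + (y_t - delta_p)**2))
print(locate_point_source(P0, Px, Py, delta_phi, delta_p, f))
# -> recovers x0 = 0.7e-6 and y0 = -0.4e-6 up to floating-point error
```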

For the detection of an edge position there are two crucial coordinates: (I) the orientation angle ϑK relative to the pixel matrix and (II) the normal distance rk between the optical axis of the k'th channel and the edge (Fig. 4). We will mainly deal with the second one, since the orientation angle can be found with sufficient accuracy by a standard edge detection filter (e.g. of Sobel type) even in low-resolution images.
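For completeness, one possible way to obtain the orientation angle is sketched below; this Sobel-based estimator is our assumption of a suitable filter, not the authors' implementation:

```python
import numpy as np
from scipy.ndimage import sobel

def edge_orientation(img):
    """Estimate the orientation of a single dominant edge relative to the
    pixel matrix from a low-resolution image of pinhole powers (a sketch;
    any standard edge filter would do)."""
    gx = sobel(img, axis=1)             # horizontal intensity gradient
    gy = sobel(img, axis=0)             # vertical intensity gradient
    w = np.hypot(gx, gy)                # gradient magnitude as weight
    # The magnitude-weighted mean gradient direction is the edge normal;
    # the edge itself is perpendicular to it.
    return np.arctan2((w * gy).sum(), (w * gx).sum())
```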

Fig. 4. The difference of the normal distances of the edge to the optical axis of each channel (•) is given by the pitch difference Δp and the orientation angle ϑK. Dashed box: The intensity in the image plane of one channel on a path normal to the edge.

Because the edge extends infinitely compared to the diameter of the radially symmetric ASF, the intensity in the k'th channel is proportional to the one-dimensional convolution of the impulse response R with the edge profile in the direction normal to the edge (see Fig. 4):

$$I_k(r) = \mathrm{const}\cdot\int R(r - r_k)\cdot Q_0\cdot\Theta(r_k)\; dr_k + B. \tag{12}$$

In Eq. (12), Θ is the Heaviside step function describing the geometrical image of an edge with total irradiance Q₀ on a constant background B. Using Eq. (2) with φ ≈ r/f for small angles yields the intensity distribution in the k'th channel

$$I_k(r) = \mathrm{const}\cdot\frac{1}{2}\sqrt{\frac{\pi}{s}}\left[\mathrm{erf}\!\left(\sqrt{s}\cdot r\right) + 1\right] + B, \tag{13}$$

wherein the definition of the error function erf(x) is used

$$\mathrm{erf}(x) = \frac{2}{\sqrt{\pi}}\int_0^x \exp\{-t^{2}\}\, dt. \tag{14}$$

In this case, calculating the intensity ratio of two adjacent channels would not help much because of the complex structure of the error function as well as the unknown background illumination. Instead, the ratio is calculated for the derivatives of the measured powers in adjacent channels along x ($\tilde{\alpha}_x$) and along y ($\tilde{\alpha}_y$). The resulting relationship is similar to that for the point source (Eq. (11)), except for an additional dependence on the orientation angle ϑK:

$$r_0 = \pm\left[\frac{\ln(\tilde{\alpha}_x)}{2\, s\, \Delta p\cdot\cos\vartheta_K} + \frac{\Delta p\cdot\cos\vartheta_K}{2}\right]. \tag{15}$$

The ± stands for a rising or a falling edge, respectively. An analogous relation follows for adjacent channels along the y axis, with the cosine replaced by a sine function.
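A minimal sketch of Eq. (15) in the same style as the point-source estimator; forming the derivative ratio from the raw powers (e.g. by finite differences along the scan direction) is assumed to have happened beforehand:

```python
import numpy as np

def edge_distance(alpha_x, theta_k, delta_phi, delta_p, f, rising=True):
    """Normal distance r0 between the edge and the optical axis of the
    center channel, Eq. (15). alpha_x is the ratio of the power derivatives
    of two adjacent channels along x, theta_k the edge orientation angle in
    radians, delta_phi the acceptance angle, delta_p the pitch difference
    and f the focal length (same length unit as delta_p)."""
    s = 4.0 * np.log(2.0) / (delta_phi * f) ** 2        # Eq. (10)
    c = np.cos(theta_k)                                 # sin() for the y pair
    sign = 1.0 if rising else -1.0                      # rising/falling edge
    return sign * (np.log(alpha_x) / (2.0 * s * delta_p * c)
                   + delta_p * c / 2.0)
```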

Other objects like rectangles or triangles can also be treated as long as they consist of edges that can be distinguished in the image and therefore segmented.

4. Experimental verification and results

To verify the proposed methods we used the experimental setup shown in Fig. 5. The artificial apposition compound eye and a microscope objective, needed to relay the pinhole layer onto a CCD, are fixed on a rotary stage. During the measurement the stage is rotated stepwise about the z axis, simulating object movement through the FOV.

Fig. 5. Experimental setup for position detection with hyperacuity. A point source at infinity (a) or an edge (b) is used as object within the FOV of the artificial apposition compound eye (APCO). The edge is illuminated with homogenized white light from an RGB LED source. The illuminated pinholes on the backside of the artificial apposition compound eye are relayed onto a CCD. Rotating the ensemble of the artificial apposition compound eye, microscope objective (MO) and CCD camera simulates object movement.

For each step of the stage orientation (ϕref) the power within the pinholes is measured. The position of the point source or edge is then calculated from the powers of adjacent pinholes using Eq. (11) or Eq. (15), respectively. Afterwards, the measured angular distance between two positions (Δϕm) is compared with the change of the stage orientation angle (Δϕref), which gives the error, or acuity (δa), of the method (see Fig. 6(a)). For each measurement the artificial compound eye is tilted with respect to the path of the point source or edge; the path of the object is therefore linear but, in general, not parallel to the x or y axis of the image plane.
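The evaluation itself reduces to comparing consecutive angular steps; a small sketch, assuming arrays phi_m and phi_ref of measured and reference orientations in degrees:

```python
import numpy as np

def acuity(phi_m, phi_ref):
    """Per-step error of the method: measured angular distances between
    consecutive object positions minus the reference steps of the rotary
    stage (cf. Fig. 6(a))."""
    return np.diff(phi_m) - np.diff(phi_ref)
```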

The accuracies measured with different artificial apposition compound eyes for either a point source or an edge are listed in Table 1. For comparison, these accuracies are also given as a percentage of the resolution limit of the artificial apposition compound eye.

Fig. 6. (a): Example of measured angular distances Δϕm (blue dots) compared with reference values Δϕref (red line) for an artificial apposition compound eye with 64×64 channels imaging an edge. The difference of the two gives the acuity δa of the measurement (black). The acceptance angle is Δφ = 0.9°. (b): Overlap of the high-accuracy zones in three adjacent channels. The measured accuracies are shown in relation to the angular distance to the optical axis of one channel.

Table 1. Experimental results: measured accuracies for different artificial compound eyes in relation to the acceptance angle, which corresponds to a resolution of one line pair (LP). The pinhole diameters d are 2, 3 and 4 µm.


The listed values represent the best and worst cases from several measurements. The large variation of these values is mainly due to systematic deviations from the imaging model caused by deviations in the shape and size of the pinholes. In this case, neighboring channels exhibit different responses to the same stimulus. However, it has to be emphasized that the measured accuracy was reproducible within the limits set by the signal-to-noise ratio (SNR) when the same artificial compound eye was used. As seen in Fig. 7, the measured maximum accuracy is proportional to the SNR. It should be noted that the accuracies of Table 1 are only valid within a finite part of the FOV of each channel with an extent of less than 1°. However, it was found that these zones overlap between adjacent channels (Fig. 6(b)). Hence, the increased accuracy is achieved across the whole FOV of the artificial apposition compound eye, which extends over 25°×25°. Although the systematic deviation increases at the outer borders of the overall FOV due to the space variance of the ASF caused by off-axis aberrations, this effect can be neglected for small FOVs. Alternatively, the off-axis aberrations can be corrected by using a chirped array of ellipsoidal microlenses [16].

Fig. 7. Measured maximum accuracy in degrees as a function of the signal-to-noise ratio (SNR).

5. Conclusion and outlook

The experimentally demonstrated methods of hyperacuity provide a new approach for accessing highly accurate information with an artificial apposition compound eye even though the number of image pixels is small. To achieve this, an overlap between the FOVs of adjacent optical channels as well as knowledge about the imaging process is used to derive an analytical and unique relationship between measured powers and the position of edges or point objects. For example, position information was extracted from an image containing 50×50 pixels with a fidelity that equals 500×500 effective pixels. Accuracies up to a factor of 50 beyond the smallest resolvable feature size have been achieved. Furthermore, the measured position is independent of the absolute irradiance of the source, the object distance and the background illumination. However, the highest achievable accuracy is limited by the SNR in the image. During the experimental verification it was shown that, although the zone of high accuracy within each channel is limited, the zones of each adjacent pair out of the total of up to 80×60 channels overlap. Therefore the high accuracy is obtained across the whole two-dimensional FOV of 25°×25°. In order to minimize the variation of the results, either a better homogeneity of the artificial apposition compound eye parameters has to be achieved or a calibration of the imaging model for the individual lens may be applied. A third option is a method that is largely independent of the precise imaging model.

Three major conclusions for the performance of the position detection result from the experimental investigations: (I) A large overlap between adjacent ASFs causes a dense sampling of the object space, which improves hyperacuity but leads to a small overall FOV for a given number of channels. (II) Increasing the sensitivity also improves the maximum obtainable accuracy. (III) The SNR sets the final limit on the maximum accuracy. Future work on the design of a new kind of artificial compound eye inspired by the neural superposition eye [17] will address these points.

A long-term task is to combine the principles of artificial apposition compound eyes with on-chip parallel analog pre-processing (e.g. by smart pixels). This would lead to ultra-thin imaging sensors capable of hyperacuity without the need for off-chip processing by a PC.

References and links

1. J. Duparré, P. Dannberg, P. Schreiber, A. Bräuer, and A. Tünnermann, "Artificial apposition compound eye fabricated by micro-optics technology," Appl. Opt. 43, 4303–4310 (2004).
2. J. Duparré, P. Dannberg, P. Schreiber, A. Bräuer, and A. Tünnermann, "Thin compound-eye camera," Appl. Opt. 44, 2949–2956 (2005).
3. R. Völkel, M. Eisner, and K. J. Weible, "Miniaturized imaging systems," Microelectron. Eng. 67–68, 461–472 (2003).
4. K. Nakayama, "Biological image motion processing: a review," Vision Res. 25, 625–660 (1985).
5. M. J. Wilcox and D. C. Thelen, Jr., "A Retina with Parallel Input and Pulsed Output, Extracting High-Resolution Information," IEEE Trans. Neural Net. 10, 574–583 (1999).
6. S. B. Laughlin, "Form and function in retinal processing," TINS 10, 478–483 (1987).
7. J. S. Sanders and C. E. Halford, "Design and analysis of apposition compound eye optical sensors," Opt. Eng. 34, 222–235 (1995).
8. G. Westheimer, "Diffraction Theory and Visual Hyperacuity," Am. J. Optom. Physiol. Opt. 53, 362–364 (1976).
9. R. A. Young, "Bridging the gap between vision and commercial applications," in Human Vision, Visual Processing and Digital Display VI, B. E. Rogowitz and J. P. Allebach, eds., Proc. SPIE 2411, 2–14 (1995).
10. S. Viollet and N. Franceschini, "Visual servo system based on a biologically-inspired scanning sensor," in Sensor Fusion and Decentralized Control in Robotic Systems II, G. T. McKee and P. S. Schenker, eds., Proc. SPIE 3839, 144–155 (1999).
11. K. Hoshino, F. Mura, and I. Shimoyama, "A one-chip scanning retina with an integrated micromechanical scanning actuator," J. Microelectromech. Syst. 10, 492–497 (2001).
12. M. S. Currin, P. Schonbaum, C. E. Halford, and R. G. Driggers, "Musca domestica inspired machine vision system with hyperacuity," Opt. Eng. 34, 607–611 (1995).
13. D. T. Riley, W. M. Harman, E. Tomberlin, S. F. Barrett, M. Wilcox, and C. H. G. Wright, "Musca domestica inspired machine vision system with hyperacuity," in Smart Structures and Materials 2005: Smart Sensor Technology and Measurement Systems, E. Udd and D. Inaudi, eds., Proc. SPIE 5758, 304–320 (2005).
14. K. G. Götz, "Die optischen Übertragungseigenschaften der Komplexaugen von Drosophila," Kybernetik 2, 215–221 (1965).
15. A. W. Snyder, "Acuity of compound eyes: Physical limitations and design," J. Comp. Physiol. A 116, 161–182 (1977).
16. J. Duparré, F. Wippermann, P. Dannberg, and A. Reimann, "Chirped arrays of refractive ellipsoidal microlenses for aberration correction under oblique incidence," Opt. Express 13, 10539–10551 (2005).
17. K. Kirschfeld and N. Franceschini, "Optische Eigenschaften der Ommatidien im Komplexauge von Musca," Kybernetik 5, 47–52 (1968).

OCIS Codes
(040.1240) Detectors : Arrays
(110.0110) Imaging systems : Imaging systems
(150.0150) Machine vision : Machine vision
(330.1070) Vision, color, and visual optics : Vision - acuity
(330.1880) Vision, color, and visual optics : Detection
(350.3950) Other areas of optics : Micro-optics

ToC Category:
Imaging Systems

History
Original Manuscript: September 15, 2006
Revised Manuscript: November 29, 2006
Manuscript Accepted: December 1, 2006
Published: December 11, 2006




