Optics Express

  • Editor: C. Martijn de Sterke
  • Vol. 19, Iss. 5 — Feb. 28, 2011
  • pp: 4294–4300

Optical imaging with phase-coded aperture

Wanli Chi and Nicholas George


http://dx.doi.org/10.1364/OE.19.004294


Abstract

Experimental results are shown for an integrated computational imaging system with a phase-coded aperture. A spatial light modulator works as a phase screen that diffracts light from a point object into a uniformly redundant array (URA). Excellent imaging results are achieved after correlation processing. The system has the same depth of field as a diffraction-limited lens. Potential applications are discussed.

© 2011 Optical Society of America

1. Introduction

The advances in computer speed and image processing make possible optical imaging without conventional lenses. In this paper we report experimental studies of a previously described incoherent imaging system without a focusing lens [1]. In our approach the optical-coded-aperture imager consists of a phase plate followed by a detector array, as shown in Fig. 1. It represents a novel extension of the X-ray coded aperture system [2–4] to optical wavelengths, where we have applied phase retrieval concepts [5, 6] to design the phase plate whose diffraction pattern for a point object is a bandlimited uniformly redundant array (bl-URA). Correlation processing is applied to the intermediate image on the detector array to recover the object. In Sec. 2, a general theory of linear integrated imaging is presented. Section 3 contains a description of the experimental setup; Section 4 has results for the coded-aperture system; and Section 5 includes conclusions and potential applications of this new imager.

Fig. 1 The experimental setup for the coded aperture imaging system: O, Object; BS, Beam Splitter; A, Aperture; SLM, Spatial light modulator; D, Detector array; BP, Blackened metal plate.

2. Linear system theory for integrated imaging

Consider a shift-invariant linear optical imaging system. The image i(x,y) can be expressed as a function of object o(x,y) and point spread function (PSF) h(x,y) in the following convolution form,
i(x,y) = ∫∫ o(ξ,η) h(x−ξ, y−η) dξ dη.
(1)

Herein, we would like to assert that Eq. (1) can be considered as a general form for optical imaging where h(x,y) can be any realizable function. More specifically, h(x,y) need not be a delta-like function for sharp imaging, as is seen below.

Assume there exists a linear shift invariant operator L{•} which, when applied to h(x,y), yields the following result,
L{h(x,y)}=fδ(x,y)+g(x,y),
(2)
where fδ(x,y) is a Dirac delta-like function such as an Airy disk function that is spatially separated from the function g(x,y). We can add a linear digital processing system by applying the optical image to the linear operator L{•}. The result is
L{i(x,y)} = ∫∫ o(ξ,η) fδ(x−ξ, y−η) dξ dη + ∫∫ o(ξ,η) g(x−ξ, y−η) dξ dη.
(3)

The first term in Eq. (3) is a recovered sharp image, and fδ(x,y) is the PSF of the overall system including both optics and image processing.

In the design process of an integrated imaging system, an important step is to find a function h(x,y) which (i) is realizable with an optical system and (ii) can be transformed into a delta-like function by some linear operator. In practice, the optical system generally has further constraints on optical materials, size, number of elements, etc., so h(x,y) must also be realizable under these constraints.

The linear operator L{•} can take many forms. For example, it can be an identity operator, a differential operator, or a correlation operator.

The simplest example of integrated imaging is a diffraction limited lens with a full circular aperture, where h(x,y) is a delta-like Airy disk, and L{•} is an identity operator. Another example is an integrated system where the image is blurred by lens aberrations and a subsequent linear deconvolution algorithm is applied to the intermediate image to yield a sharp picture.
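The second example above (an image blurred by aberrations, then linearly deconvolved) can be sketched in a few lines of NumPy. Everything here is illustrative rather than the paper's processing: a made-up two-point object, an isotropic Gaussian blur standing in for lens aberrations, and a regularized (Wiener-style) inverse filter as one common choice of linear deconvolution operator L{•}.

```python
import numpy as np

n = 64
# Toy object: two point sources (made-up positions for illustration).
o = np.zeros((n, n))
o[30, 30] = 1.0
o[30, 40] = 1.0

# Stand-in aberration blur: an isotropic Gaussian PSF, sigma = 3 px.
x = np.arange(n) - n // 2
h = np.exp(-(x[:, None] ** 2 + x[None, :] ** 2) / (2 * 3.0 ** 2))
h /= h.sum()
H = np.fft.fft2(np.fft.ifftshift(h))      # transfer function, origin at (0, 0)

# Intermediate (blurred) image: i = o * h, computed in the Fourier domain.
i_img = np.real(np.fft.ifft2(np.fft.fft2(o) * H))

# Linear deconvolution L{.}: a regularized (Wiener-style) inverse filter.
eps = 1e-3
rec = np.real(np.fft.ifft2(np.fft.fft2(i_img) * np.conj(H)
                           / (np.abs(H) ** 2 + eps)))

# The recovered image peaks at the original point-source locations,
# and is sharper there than the blurred intermediate image.
peak = np.unravel_index(np.argmax(rec), rec.shape)
print(peak)
```

The regularization constant eps trades resolution against noise amplification at frequencies where the transfer function is small; the value used here is arbitrary.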

In an earlier paper [1] we presented one more example to expand the linear imaging system design possibilities, in which h(x,y) is a bl-URA, and the linear operator L{•} is a correlation operator. The main purpose of this article is to present experimental results of such a system which we call a phase-coded-aperture imaging system.

Briefly, the phase-coded-aperture optical imager consists of a phase plate followed by a detector array. For a point source, the diffraction pattern caused by the phase plate is the PSF of the optical system. It is a bl-URA calculated in the following manner:
h(x,y)=t(x,y)*b(x,y),
(4)
where * means convolution, t(x,y) is a URA and b(x,y) is a bandlimited function. A phase retrieval method is used to calculate the phase function that yields such a bl-URA. The linear operator is defined as
L{h(x,y)} = h(x,y) ⊗ tR(x,y),
(5)
where ⊗ is a correlation operator and tR(x, y) is a repeated URA with mean removed, i.e.,
tR(x,y) = ∫∫ [t(x−ξ, y−η) − t̄] comb(ξ/Dx, η/Dy) dξ dη,
(6)
in which Dx, Dy are the sizes of URA in X and Y directions, respectively; and t̄ is the mean value of URA t(x,y).
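Equation (6) amounts to removing the mean from one URA period and tiling it periodically. A minimal sketch, in which a seeded random 0/1 array stands in for a true URA (a placeholder only; a true URA has an exactly delta-like periodic autocorrelation):

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in for the URA t(x,y): a seeded random 0/1 array (placeholder).
t = (rng.random((8, 8)) > 0.5).astype(float)

# Eq. (6): remove the mean, then replicate periodically -- the comb function
# under the integral is just a periodic tiling of the single URA period.
tR = np.tile(t - t.mean(), (3, 3))   # three periods in each direction

print(tR.shape)                      # (24, 24)
```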

Combining Eqs. (4)–(6) yields the following result,
L{h(x,y)} = C comb(x/Dx, y/Dy) * Λ(x/Δx, y/Δy) * b(x,y),
(7)
where Λ(•) is the triangle function defined as Λ(x) = max{1 − |x|, 0}; C is a constant whose exact value is determined by the URA; and Δx, Δy are the pixel sizes of the URA in the X and Y directions, respectively. Equation (7) represents an array of delta-like functions of the form
fδ(x,y)=Λ(x/Δx,y/Δy)*b(x,y),
(8)
in which a normalization constant is omitted.

So by correlating i(x,y) with tR(x, y), we can recover a sharp image with the overall PSF fδ(x,y) shown in Eq. (8).
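The phase function that produces the bl-URA can be computed with an iterative phase-retrieval algorithm of the kind cited above [5, 6]. The following is only a schematic sketch: a seeded random 0/1 target stands in for the bl-URA (the real target is band-limited), the iteration count is arbitrary, and this Gerchberg-Saxton-style loop is the simplest such algorithm, not necessarily the method used in the paper.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 64

# Target far-field intensity: a seeded random 0/1 pattern standing in for
# the bl-URA (placeholder only).
target = (rng.random((n, n)) > 0.5).astype(float)
target_amp = np.sqrt(target)

# Gerchberg-Saxton-style iteration: unit amplitude (pure phase screen) in
# the aperture plane, target amplitude imposed in the diffraction plane.
phase = rng.uniform(0.0, 2.0 * np.pi, (n, n))
for _ in range(200):
    far = np.fft.fft2(np.exp(1j * phase))            # propagate to far field
    far = target_amp * np.exp(1j * np.angle(far))    # impose target amplitude
    phase = np.angle(np.fft.ifft2(far))              # keep only the phase

# Diffraction pattern actually produced by the pure-phase screen; it should
# resemble (though not exactly equal) the target intensity.
achieved = np.abs(np.fft.fft2(np.exp(1j * phase))) ** 2
```

Because only the phase of each plane is retained, the overall scale of the imposed amplitude does not affect the iteration.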

3. Experimental setup

4. Experimental results

The linear operator processing steps are illustrated in Fig. 2. A point source located at a distance of 1275mm is imaged by the coded aperture system shown in Fig. 1. The intermediate image is cross correlated with a repeated URA and the central portion of the correlation picture is the recovery of the point source image.

Fig. 2 Illustration of the correlation image processing using a point object located at a distance of 1275mm. (a) the intermediate image or bl-URA at the detector, D in Fig. 1, (also refer to Fig. 6a for a better view); (b) the repeated URA pattern; (c) the result of image cross-correlation between (a) and (b); (d) the center section of (c) or a point object recovery (also refer to Fig. 3a).

For a point object, the cross correlation between the intermediate image and the repeated URA yields a periodically distributed point-like pattern, shown in Fig. 2c. The period of the pattern is equal to the size of the URA. Similarly, for a general object the correlation result is a repeated object pattern with the same period. If the object (in image space) is larger than the square size in Fig. 2c, the periodic replicas will overlap. This constrains the maximum object size (field of view) that can be faithfully imaged. Generally the linear size of the URA is chosen to be about half the detector size D; then the angular field of view is θ = D/(2L), where L is the distance between the phase plate and the detector.
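The periodic correlation recovery described above can be sketched numerically. The seeded random 0/1 mask below is only a stand-in for the bl-URA (for a true URA the sidelobes would vanish exactly), the object positions are made up, and circular FFT-based correlation plays the role of the repeated URA:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 32

# Stand-in coded-mask PSF: a seeded random 0/1 array in place of the bl-URA.
t = (rng.random((n, n)) > 0.5).astype(float)
tR = t - t.mean()                      # mean-removed decoding pattern, Eq. (6)

# Toy object: one bright point and one dimmer point (made-up positions).
o = np.zeros((n, n))
o[10, 12] = 1.0
o[20, 5] = 0.6

# Intermediate image i = o * h: circular convolution via FFT; the circular
# wrap-around plays the role of the repeated URA in Eq. (6).
i_img = np.real(np.fft.ifft2(np.fft.fft2(o) * np.fft.fft2(t)))

# Recovery, Eq. (5): circular cross-correlation of i with the decoding pattern.
rec = np.real(np.fft.ifft2(np.fft.fft2(i_img) * np.conj(np.fft.fft2(tR))))

# The brightest recovered pixel falls at the brighter source position.
print(np.unravel_index(np.argmax(rec), rec.shape))
```

With a random stand-in mask the recovered points sit on a low-level noise floor; with a true URA that floor is exactly flat.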

Figure 2d is the overall PSF of the coded aperture imaging system including optics and digital processing. To achieve a good image recovery, the URA used in correlation processing should have the same orientation and size as the bl-URA at the detector plane. The recovered image quality deteriorates quickly for even a small mismatch. In this experiment, when the object is located at a distance of 1275mm, the bl-URA has a size of 6.3mm. Figure 3 shows the overall PSFs resulting from a mismatch between the URA size and the bl-URA at the detector. When the URA size used for correlation processing is changed to 6.2mm, significant artifacts appear in the combined PSF after digital processing. This can lead to poor imaging results, especially for extended objects. Sometimes resampling and interpolation of the URA are required before correlation processing; studying the effect of interpolation on the artifacts is beyond the scope of this paper.

Fig. 3 Image recovery result with a point source located at a distance of 1275mm using URAs of different sizes in correlation processing. URA size is (a) 6.3mm; (b) 6.2mm and (c) 6mm.

Figure 4 shows the imaging results for a letter object located at a distance of 1275mm. The intermediate image in Fig. 4a is processed linearly using the same correlation method, and the recovery is shown in Fig. 4b. In Fig. 4a we observe a general feature of the intermediate image for an extended object: it has an overall envelope that is bright in the center, with the intensity slowly dropping to zero at the edge. There are also small-scale intensity variations which reflect the details of the object. In Fig. 4a one can also see regions of the CCD where the detector response is low. This causes a large intensity drop (the specks in the image, some of which are indicated by arrows) relative to the envelope. Despite this, a good recovery is still achieved, as shown in Fig. 4b. In the image recovery, one can simply raise these low values so that the envelope of the intermediate image is smooth; the exact values in these small regions have little effect on the recovery. By comparison, in a conventional lens system a small region of dead pixels would cause a complete loss of image in that section.

Fig. 4 Experimental result with a letter object located at a distance of 1275mm. (a) intermediate image (the arrows indicate regions where the CCD pixel response is low; not all such regions are labeled); (b) image recovery result by correlation processing. The linear size of the recovery is about half that of the intermediate image.

Figure 5 shows the recovery when the object distance is varied from 1275mm to 1000mm, corresponding to defocus amounts up to 1.6λ. In the recovery, the same repeated URA pattern is used for correlation processing of all the intermediate images at the different distances. The depth of field of the coded aperture system is similar to that of a diffraction-limited lens, i.e., images within ±λ/4 defocus are of good quality. Interestingly, the defocused image quality degrades in a different way than with a conventional diffraction-limited lens: for a coded aperture system with large defocus, one observes repetitions of the object over the whole scene, and the intensity of these repetitions increases as the defocus amount becomes larger.

Fig. 5 Image recovery result for the object located at different distances. The same URA pattern as shown in Fig. 2 is used for correlation processing. The object is located at a distance of (a) 1275mm; (b) 1225mm; (c) 1175mm; (d) 1100mm; (e) 1050mm; (f) 1000mm. The corresponding defocus amounts are 0, λ/4, λ/2, λ, 1.3λ and 1.6λ, respectively.

In order to understand the cause of the artifacts due to different object distances, we show in Fig. 6 the PSFs of the optical system for two object distances: 1275mm and 1100mm. The distance of 1100mm corresponds to a defocus of λ. Two differences are noticed: (i) the fine details of the PSF are different; (ii) the full sizes of the PSFs are different. The size of the PSF for a focused distance is 6.3mm, while the size of the defocused PSF is 6.42mm. (The sizes of PSFs are found by correlating the PSFs with repeated URA of different scales. The size of the URA which yields the best recovered point object is considered as the size of the PSF of the optical system.)
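The parenthetical scale search above (correlate the PSF against repeated URAs of different sizes and keep the best) can be sketched as follows. The pattern, the nearest-neighbor resampling, and the candidate sizes are all illustrative stand-ins, not the paper's actual processing:

```python
import numpy as np

rng = np.random.default_rng(2)
t = (rng.random((16, 16)) > 0.5).astype(float)   # stand-in URA pattern

def resample(a, shape):
    """Nearest-neighbor resampling (NumPy only)."""
    ii = np.arange(shape[0]) * a.shape[0] // shape[0]
    jj = np.arange(shape[1]) * a.shape[1] // shape[1]
    return a[np.ix_(ii, jj)]

# "Measured" PSF: the mask rendered at an unknown size, here 20 px across.
psf = resample(t, (20, 20))

# Try candidate URA sizes; the size that matches the PSF maximizes the
# zero-lag correlation with the mean-removed pattern.
scores = {}
for s in (18, 19, 20, 21, 22):
    cand = resample(t, (s, s))
    cand = cand - cand.mean()
    k = min(s, psf.shape[0])
    scores[s] = float(np.sum(psf[:k, :k] * cand[:k, :k]))

best = max(scores, key=scores.get)
print(best)   # the matching size, 20
```

The same search applied to the intermediate image of an extended object underlies the ranging application discussed in the concluding remarks.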

Fig. 6 The intermediate images for a point object located at (a) 1275mm and (b) 1100mm.

A much better recovery can be obtained for a defocused object if a correct size URA is used in recovery. This is shown in Fig. 7, where the URA with a size of 6.42mm is used to recover the object located at the defocused distance of 1100mm. We see a significant improvement of image quality.

Fig. 7 Image recovery result for object located at 1100mm using repeated URA pattern of different scales. The size of URA array is (a) 6.3mm, (b) 6.42mm.

5. Concluding remarks

In the literature there are many efforts to extend the X-ray coded aperture system to optical imaging [7, 8]. In our approach based on linear system theory, we have described an integrated computational imaging system in two parts: the optical system followed by a linear operator. The linear operator transforms the PSF of the optical system into a delta-like function, Eq. (2). For the illustration of the coded aperture in Eq. (5), the linear operator is a correlation operation. In correlation processing it is important to set the size of the URA equal to that of the PSF of the optical imager. Experimental results are presented in order to demonstrate the validity of the system concept in our earlier publication [1]. In our second-generation experiments, considerable imaging capability is to be demonstrated by the use of a transmissive phase plate fabricated with photolithographic techniques. Good imaging is obtained using a CCD with non-uniform pixel response. The depth of field of the coded aperture system is the same as that for a diffraction-limited lens, i.e., ±λ/4 defocus is acceptable.

For objects closer than the focused plane, the PSF array of the coded aperture system becomes larger; while not shown in this paper, the PSF is smaller if the object is farther away than the plane of focus. Thus one can distinguish the sign of defocus with a coded aperture system. Furthermore, the defocus amount can be found accurately by correlating the intermediate image (of a point or an extended object) with repeated URAs of different scales. So besides imaging, the coded aperture system can also be used for ranging.

Since there is no focusing element in the system, a bright laser beam in the object space will not form a point at the detector plane; instead the light is diffracted over a large area of the detector. This provides damage protection for the detector. Unlike infrared imaging with lenses, there is no specular light reflected from the detector back to the object field in this coded aperture system, which makes it useful in applications where such retroreflection must be avoided.

Acknowledgments

This research is supported in part by the Army Research Office.

References and links

1. W. Chi and N. George, “Phase-coded aperture for optical imaging,” Opt. Commun. 282, 2110–2117 (2009). [CrossRef]
2. R. H. Dicke, “Scatter-hole cameras for X-rays and Gamma rays,” Astrophys. J. 153, L101 (1968). [CrossRef]
3. E. E. Fenimore and T. M. Cannon, “Coded aperture imaging with uniformly redundant arrays,” Appl. Opt. 17, 337–347 (1978). [CrossRef] [PubMed]
4. R. G. Simpson and H. H. Barrett, “Coded aperture imaging,” in Imaging in Diagnostic Medicine, S. Nudelman, ed. (Plenum, 1980).
5. J. R. Fienup, “Phase retrieval algorithms: a comparison,” Appl. Opt. 21, 2758–2769 (1982). [CrossRef] [PubMed]
6. J. C. Dainty and J. R. Fienup, “Phase retrieval and image reconstruction for astronomy,” in Image Recovery: Theory and Application, H. Stark, ed. (Academic, 1987).
7. D. P. Casasent and T. Clark, eds., “Adaptive Coded Aperture Imaging and Non-imaging Sensors,” Proc. SPIE 6714 (2007).
8. D. P. Casasent and S. Rogers, eds., “Adaptive Coded Aperture Imaging and Non-imaging Sensors II,” Proc. SPIE 7096 (2008).

OCIS Codes
(100.0100) Image processing : Image processing
(110.0110) Imaging systems : Imaging systems
(110.1758) Imaging systems : Computational imaging
(110.7348) Imaging systems : Wavefront encoding

ToC Category:
Imaging Systems

History
Original Manuscript: November 23, 2010
Revised Manuscript: February 11, 2011
Manuscript Accepted: February 13, 2011
Published: February 18, 2011

Citation
Wanli Chi and Nicholas George, "Optical imaging with phase-coded aperture," Opt. Express 19, 4294-4300 (2011)
http://www.opticsinfobase.org/oe/abstract.cfm?URI=oe-19-5-4294


