Single exposure super-resolution compressive imaging by double phase encoding

Yair Rivenson, Adrian Stern, and Bahram Javidi


Optics Express, Vol. 18, Issue 14, pp. 15094-15103 (2010)
http://dx.doi.org/10.1364/OE.18.015094



Abstract

Super-resolution is an important goal of many image acquisition systems. Here we demonstrate the possibility of achieving super-resolution with a single exposure by combining the well-known optical scheme of double random phase encoding, traditionally used for encryption, with results from the relatively new and emerging field of compressive sensing. It is shown that the proposed model can be applied to recover images from a general image degradation model that includes both diffraction-limited and geometrically limited resolution.

© 2010 OSA

1. Introduction

Super resolution (SR) is considered one of the "holy grails" of optical imaging and image processing. The endeavor to obtain high spatial resolution images with limited-resolution imaging systems has raised great interest, both practically and theoretically (see for example [1–7]). In general, the resolution of an imaging system is limited by its optical subsystem and by its digital sensing subsystem. Optical resolution is usually limited by some optical blur mechanism, the most common being diffraction blur. Digital sensing subsystems using pixelated sensors (such as CCD or CMOS) induce resolution loss due to subsampling, according to the Shannon-Nyquist sampling theorem, and due to integration over the finite pixel fill-factor. SR techniques developed to overcome the optical subsystem's spatial resolution loss are generally referred to as "optical SR" [4] or "diffractive SR" techniques [5,7]. SR techniques designed to overcome the sampling limit of the imaging sensor, generally referred to as "geometrical SR" [5,7] or "digital SR" [3] techniques, have attracted most of the attention during the last three decades [1]. However, we point out that those methods cannot concomitantly overcome the sensor resolution loss and the optical resolution loss. The maximum achievable resolution with digital SR is inherently upper-bounded by the diffraction-limited bandwidth [4].

In order to overcome imaging system resolution limitations, SR techniques typically capture additional object information in some indirect way, and then perform processing that extracts the high-resolution data from the overall captured data. The additional information is typically captured by encoding the image in the temporal domain (e.g., by taking multiple sub-pixel-shifted exposures), in the spatial domain (using multiple apertures or encoding the aperture), or in other domains, such as the state of polarization or the spectral domain.

In this work we present an SR method that does not need any additional information acquisition. The data are captured with a single exposure, without sacrificing the field of view or requiring any other measurement dimensions. The key to achieving SR without additional data is utilizing the fact that the information in all human-intelligible images is highly redundant. Therefore, instead of acquiring extra data, we properly encode the data within one image and apply reconstruction algorithms that recover the desired high-resolution image.

Our proposed approach is to use the well-known double random phase encoding (DRPE) technique [8,9] as a means of encoding the scene. The acquisition-reconstruction process relies on the emerging field of compressive sensing (CS). Compressive sensing theory breaks the Shannon-Nyquist sampling paradigm by utilizing the fact that the image is sparse in some representation basis. Our approach permits both diffractive and geometrical SR. It uses a single-shot acquisition process and does not sacrifice any measurement dimensions. Unlike some conventional SR systems, the approach proposed here does not require any movements.

The paper is organized as follows: in section 2, we briefly review the DRPE technique. In section 3, we provide a short background on CS and point out the connection between DRPE and Gaussian random sensing, which is a universal sensing scheme. In section 4, we show how DRPE enables obtaining an SR image from a single exposure, and present simulation results. We conclude in section 5.

2. Double random phase encoding (DRPE)

Double random phase encoding (DRPE) was originally developed for optical security [8]. Figure 1 depicts a block diagram of the double random phase encoding process, and Fig. 2 shows a possible optical implementation. DRPE is based on random phase masks placed in the input plane and in the Fourier plane of the optical system, which whitens both the data and its Fourier spectrum. The encoded field is

u(x,y) = \left\{ t(x,y)\,\exp[i2\pi p(x,y)] \right\} * h(x,y),
(1)

where t(x,y) is the input image, p(x,y) and b(u,v) are two statistically independent white-noise patterns distributed uniformly on [0,1], * denotes two-dimensional convolution, and h(x,y) = ℑ⁻¹{exp[i2πb(u,v)]} is the impulse response associated with the Fourier-plane mask. Many implementation methods have been reported for the original DRPE [8] and similar setups, including coherent [9–11], incoherent [12], Fresnel-domain [13], and fractional-Fourier-domain [14] implementations, among others (see [15–19] to name a few). Here, we adopt compressive sensing techniques to obtain SR imaging with DRPE; we therefore denote the method DRPE-CS.

Fig. 1 Block diagram of the double phase encoding process. ℑ denotes the Fourier transform operator.

Fig. 2 DRPE implementation of Fig. 1 using a 4f optical scheme.

3. Double random phase encoder as a universal compressive sensing encoder

3.1 Brief introduction to compressive sensing

Compressive sensing (see for example [20–25]) is a relatively new sampling paradigm which seeks to sense only the "essential" features of a signal or image, i.e., CS minimizes the sampling process. This is in contrast to the conventional sampling paradigm, which can be summarized as: sample as much data as possible, and then discard most of it using some compression method (e.g., JPEG).

Fig. 3 Imaging scheme of compressed sensing [25].

Figure 3 shows a block diagram of the compressive imaging process [25]. To formulate the CS concept mathematically, let us consider an object t described by an N-dimensional real-valued vector (when the object is an image of N pixels, t is a one-dimensional vector obtained by rearranging the image in lexicographic order), which is projected (imaged) onto u, an M-dimensional vector. One can also think of M as the number of detector pixels. In CS we are interested in the case M < N, i.e., the signal is undersampled according to the Shannon-Nyquist theorem. The sensing process is given by:

u = \Phi t,
(2)

where Φ is an M×N matrix.

Compressive sensing relies on two principles: signal sparsity and the incoherence between the sensing and sparsifying operators [21]. By assuming signal sparsity, we infer that the signal t can be sparsely represented in some orthonormal basis Ψ (e.g., wavelet or DCT). Thus, α = Ψ^T t is the K-sparse representation of the image t in the basis Ψ, meaning that α has only K non-zero terms.
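To illustrate the sparsity assumption, the following minimal NumPy sketch (our own illustration; the DCT basis and the smooth test signal are arbitrary choices, not taken from the paper) forms an orthonormal DCT basis Ψ and shows that α = Ψ^T t of a smooth signal is highly compressible:

```python
# Minimal sketch (NumPy only) of the sparsity assumption: a smooth signal has a
# sparse (compressible) representation alpha = Psi^T t in an orthonormal DCT basis.
import numpy as np

N = 256
n = np.arange(N)

# Orthonormal DCT-II matrix C; the columns of Psi = C.T are the basis vectors.
C = np.sqrt(2.0 / N) * np.cos(np.pi * (n[None, :] + 0.5) * n[:, None] / N)
C[0, :] = np.sqrt(1.0 / N)
Psi = C.T                                    # t = Psi @ alpha

# A smooth test signal (two Gaussian bumps) standing in for a natural image row.
t = np.exp(-((n - 80) / 25.0) ** 2) + 0.5 * np.exp(-((n - 180) / 10.0) ** 2)

alpha = Psi.T @ t                            # transform coefficients
order = np.argsort(np.abs(alpha))[::-1]
energy = np.cumsum(alpha[order] ** 2) / np.sum(alpha ** 2)
K = int(np.searchsorted(energy, 0.99)) + 1
print(f"{K} of {N} DCT coefficients hold 99% of the energy")   # K is much smaller than N
```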

Incoherence is a measure of the dissimilarity between the sensing and sparsifying operators. It is quantified mathematically by Eq. (3):

\mu(\Phi,\Psi) = \sqrt{N}\,\max_{i,j}\left|\left\langle \varphi_i, \psi_j \right\rangle\right|,
(3)

where φ_i and ψ_j denote the column vectors of Φ and Ψ, respectively, ⟨·,·⟩ denotes the standard inner product, and N is the length of the column vectors. The mutual coherence is bounded by 1 ≤ μ ≤ √N [21]. CS theory suggests that a signal (image) measured with the sensing operator Φ can be recovered by ℓ1-norm minimization. The estimated coefficient vector α̂ is the solution of the convex optimization program [21]:

\hat{\alpha} = \arg\min_{\alpha}\,\|\alpha\|_1 \quad \text{subject to} \quad u = \Phi\Psi\alpha,
(4)

where ‖α‖₁ = Σ_i |α_i| is the ℓ1-norm. One way of guaranteeing recovery of a K-sparse signal t via ℓ1-norm minimization is to take M measurements satisfying [20,21]:

M \ge C\,\mu^{2}(\Phi,\Psi)\,K\log(N),
(5)

where C is some small positive constant. The role of the mutual coherence now becomes clear: the larger it is, the more samples are needed. One can also think of the mutual coherence as a measure of how much the projection Φ spreads the information among many entries. Thus, if every coefficient of the sparsely represented signal is spread over many projections, we have a better chance of reconstructing the signal from fewer samples. A Gaussian random sensing basis is often chosen since it is a universal CS operator, meaning that it fits signals that are sparse in any domain [21]; i.e., its mutual coherence is μ ≈ √(2 log N) regardless of Ψ.
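The mutual coherence of Eq. (3) is easy to evaluate numerically. The sketch below (our own illustration) draws a Gaussian sensing matrix, takes its unit-normalized rows as the sensing vectors so that the inner products with the columns of Ψ are well defined, and compares the resulting μ with the bounds 1 ≤ μ ≤ √N and with the √(2 log N) value quoted above; Ψ is taken as the canonical (identity) basis for simplicity:

```python
# Minimal sketch (NumPy only) of the mutual coherence in Eq. (3) for a Gaussian
# sensing matrix. The sensing vectors are the unit-normalized rows of Phi; Psi is
# the identity (canonical) sparsifying basis.
import numpy as np

rng = np.random.default_rng(0)
N, M = 1024, 256

Phi = rng.standard_normal((M, N))
Phi /= np.linalg.norm(Phi, axis=1, keepdims=True)   # unit-norm sensing vectors
Psi = np.eye(N)                                      # canonical sparsifying basis

mu = np.sqrt(N) * np.abs(Phi @ Psi).max()            # Eq. (3)
print(f"mu = {mu:.2f}  (bounds: 1 <= mu <= sqrt(N) = {np.sqrt(N):.1f}; "
      f"sqrt(2 log N) = {np.sqrt(2 * np.log(N)):.2f})")
```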

Often, when using CS, the sub-sampling is done by picking M out of N measurements uniformly at random [20,21]. This may impose limitations on the physical realization of an image sensing system. Consider a CCD camera with N×N pixels. Random sub-sampling means turning off many of the pixels, thus not using the full capability of the sensor. This setting is of little practical value unless each pixel is extremely expensive, which may be the case for some detectors (UV, for example). On the other hand, we may use a Gaussian random projection operator, in which case we do not have to randomly subsample the measurements. With a random Gaussian projection, we just need to take M measurements, which is more relevant to our physically constrained blurring and sampling scheme.

3.2 Double phase encoding as a universal sensing operator

Reformulating Eq. (1) in vector-matrix form reveals the similarity between the DRPE operation and the Gaussian random sensing operator. The vector-matrix formulation of Eq. (1) is:

u = F^{-1} H F P\, t,
(6)

where P = diag(exp[j2πp(x,y)]) is a diagonal matrix holding the elements of the first random phase mask of Eq. (1) on its diagonal, H = diag(exp[j2πb(u,v)]), and F is the N²×N² discrete Fourier transform matrix (F⁻¹ denotes its inverse). The input t and output u are N²×1 lexicographic arrangements of the input and output fields, respectively; all the matrices are of size N²×N². Thus, we may write FP in Eq. (6), which represents random scrambling in the frequency domain, as:

FP =
\begin{bmatrix}
1 & 1 & \cdots & 1\\
1 & W_N & \cdots & W_N^{(N-1)}\\
\vdots & & & \vdots\\
1 & W_N^{(N-1)} & \cdots & W_N^{(N-1)(N-1)}
\end{bmatrix}
\begin{bmatrix}
e^{j2\pi p_1} & 0 & \cdots & 0\\
0 & e^{j2\pi p_2} & \cdots & 0\\
\vdots & & \ddots & \vdots\\
0 & \cdots & 0 & e^{j2\pi p_N}
\end{bmatrix}
=
\begin{bmatrix}
e^{j2\pi p_1} & e^{j2\pi p_2} & \cdots & e^{j2\pi p_N}\\
e^{j2\pi p_1} & W_N\,e^{j2\pi p_2} & \cdots & W_N^{(N-1)}\,e^{j2\pi p_N}\\
\vdots & & & \vdots\\
e^{j2\pi p_1} & W_N^{(N-1)}\,e^{j2\pi p_2} & \cdots & W_N^{(N-1)(N-1)}\,e^{j2\pi p_N}
\end{bmatrix},
(7)
where W_N = e^{j2π/N} and p_i is drawn from a uniform distribution on [0,1]. Recall that all the entries of b(u,v) and p(x,y) are drawn independently from a uniform distribution on [0,1]; therefore, since E{e^{j2πp_i} e^{-j2πp_j}} = 0 for i ≠ j, FP exhibits inter-column statistical independence. In this sense the FP operator behaves equivalently to the random Gaussian sensing scheme [22]. According to most CS implementations, at this stage we should randomly sub-sample FPt in order to guarantee the independence of the measurements [21,22]. However, since we wish to obey the physical constraints of our optical system, which performs deterministic sampling by its nature, and we still want to de-correlate the measurements, we can instead perform the de-correlation with some optical implementation and sub-sample deterministically. This is achieved by another random scrambling, this time in the space domain, using the F⁻¹H operator in Eq. (6). Applying F⁻¹H de-correlates the result of the FPt operation (similar to the effect FP had on t); this can also be seen as guaranteeing inter-row statistical independence. Statistical independence between the measurements is thus guaranteed. After the second scrambling, the signal may undergo deterministic blurring or sub-sampling while still enjoying the powerful results of CS theory described earlier in this section.
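For completeness, a minimal NumPy sketch of the DRPE sensing operation of Eq. (6), u = F⁻¹HFPt, is given below (our own illustration). The N²×N² matrices are never formed explicitly: multiplication by F and F⁻¹ reduces to 2-D FFTs, while P and H reduce to element-wise phase factors on the space and frequency grids.

```python
# Minimal sketch (NumPy only) of the DRPE operator of Eq. (6), u = F^{-1} H F P t,
# implemented with 2-D FFTs instead of explicit N^2 x N^2 matrices.
import numpy as np

rng = np.random.default_rng(0)
N = 256
t = rng.random((N, N))                       # stand-in for the input image

p = rng.random((N, N))                       # input-plane random phase, uniform on [0,1]
b = rng.random((N, N))                       # Fourier-plane random phase, uniform on [0,1]
P = np.exp(2j * np.pi * p)                   # diagonal of P, reshaped to the image grid
H = np.exp(2j * np.pi * b)                   # diagonal of H, on the frequency grid

def drpe(t):
    """Double random phase encoding: scramble in space, then in frequency."""
    return np.fft.ifft2(H * np.fft.fft2(P * t, norm="ortho"), norm="ortho")

u = drpe(t)                                  # encoded field; |u| looks like white noise
print(u.shape, u.dtype)
```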

4. Super resolution with double random phase encoding

4.1 DRPE image degradation model

Let us consider an object image with pixel size Δo. The image is of size NΔo in both the x and y directions. The random phase masks have a pixel size Δp, which we assume to be as small as Δo. The phase-mask pitch Δp serves as the element of the high-resolution grid, and ultimately determines the achievable resolution. The image degradation model applied to the proposed DRPE sensing model is given by:

u_L(x,y) = D^{(L)}\!\left( \left\{ t(x,y)\exp[i2\pi p(x,y)] \right\} * h(x,y) * h_s \right),
(8)

where D^(L) is the decimation operator, standing for picking one of every L samples, and h_s accounts for the entire blurring caused by the optical system, including blur due to the sensor's geometrical limits, small NA (diffraction-limited imaging), motion blur, and defocus blur.

4.2 Geometrical sub-sampling

Geometrical sub-sampling refers to the case in which the resolution limitation is caused by the digital sensor (e.g., CCD or CMOS). Let us assume for convenience that each object pixel is sub-sampled by a factor of L_x and L_y in the x and y directions, respectively, i.e., each image pixel has the size L_x∆×L_y∆ due to sensor pixelation. Consequently, the number of CCD pixels is N/L, where L = L_x L_y. The image obtained with the DRPE system is given by:

u_L(x,y) = D^{(L)}\!\left( \left\{ t(x,y)\exp[i2\pi p(x,y)] \right\} * h(x,y) * h_{CCD} \right),
(9)

where

h_{CCD} = \mathrm{rect}\!\left(\frac{x}{L_x\Delta}\right)\mathrm{rect}\!\left(\frac{y}{L_y\Delta}\right)
(10)

represents the averaging (integration) over the sensor pixel area. D^(L) stands for picking one sample by averaging each L_x×L_y block of "high resolution" pixels. For example, with sub-sampling by a factor of L = 4, this operator is written in matrix notation (for a 1-D signal) as:

u_L = D^{(L)} u = \frac{1}{4}\,S\,D\,u = \frac{1}{4}
\begin{bmatrix}
1 & 0 & 0 & 0 & 0 & \cdots & 0\\
0 & 0 & 0 & 0 & 1 & \cdots & 0\\
\vdots & & & & & \ddots & \vdots\\
0 & \cdots & 0 & 0 & 0 & 0 & 1
\end{bmatrix}
\begin{bmatrix}
1 & 1 & 1 & 1 & 0 & 0 & \cdots & 0\\
0 & 1 & 1 & 1 & 1 & 0 & \cdots & 0\\
0 & 0 & 1 & 1 & 1 & 1 & \cdots & 0\\
\vdots & & & \ddots & & & & \vdots\\
0 & \cdots & 0 & 0 & 1 & 1 & 1 & 1
\end{bmatrix} u,
(11)
where S denotes the sub-sampling matrix and D performs the averaging (low-pass) operation. In order to reconstruct the object t from the measured u according to Eq. (9), we choose to solve the problem:

\min_t \|\Psi^T t\|_1 + \gamma\,\mathrm{TV}(t) \quad \text{s.t.} \quad u = D^{(L)} F^{-1} H F P\, t,
(12)

where TV stands for the total variation operator, defined as

\mathrm{TV}(x) = \sum_{i,j} \sqrt{ (x_{i+1,j} - x_{i,j})^2 + (x_{i,j+1} - x_{i,j})^2 }.
(13)

The measured signal is denoted by u, and Ψ is the sparsifying operator. The TV functional used here is well known in signal and image processing for its tendency to suppress spurious high-frequency features.
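The following sketch (our own illustration, not the code used for the simulations) makes the operators of Eqs. (11) and (13) concrete for a 1-D signal with L = 4 and a small 2-D array; solving the convex program of Eq. (12) itself additionally requires an ℓ1/TV solver, which is not shown here.

```python
# Sketch (NumPy only) of the 1-D decimation operator of Eq. (11) and of the TV
# functional of Eq. (13). Illustration only.
import numpy as np

def averaging_matrix(n, L=4):
    """D: sliding sum over windows of L 'high resolution' samples."""
    rows = n - L + 1
    D = np.zeros((rows, n))
    for i in range(rows):
        D[i, i:i + L] = 1.0
    return D

def subsampling_matrix(rows, L=4):
    """S: keep every L-th row of D's output."""
    keep = np.arange(0, rows, L)
    S = np.zeros((keep.size, rows))
    S[np.arange(keep.size), keep] = 1.0
    return S

def total_variation(x):
    """Isotropic TV of a 2-D array, Eq. (13); forward differences, edges clipped."""
    dx = x[1:, :-1] - x[:-1, :-1]
    dy = x[:-1, 1:] - x[:-1, :-1]
    return np.sqrt(dx ** 2 + dy ** 2).sum()

n, L = 16, 4
u = np.arange(n, dtype=float)
D = averaging_matrix(n, L)
S = subsampling_matrix(D.shape[0], L)
u_L = (1.0 / L) * S @ D @ u               # Eq. (11): non-overlapping block averages of u
print(u_L)

img = np.outer(np.hanning(32), np.hanning(32))
print(total_variation(img))
```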

Fig. 4 (a) Original USAF 1024×1024-pixel resolution chart image, with pixel spacing ∆. (b) DRPE image captured with 512×256 pixels, i.e., after 2×4-pixel averaging. (c) Image reconstructed from the DRPE captured image (b). (d) Zoom-in to (a). (e) Result of downsampling the image in (a): 512×256 regular samples of the original image averaged with a 2×4-pixel kernel and then up-sampled to 1024×1024 by bi-cubic interpolation. (f) Zoom-in to (c), which is captured with DRPE-CS.

Figure 4 shows the simulation results for the geometrically resolution-limited model using a USAF resolution chart. The original image has a size of 1024×1024 pixels, which was also the size of the random phase masks. The detector pixel size was taken to be 2∆ by 4∆, i.e., 2 and 4 times larger in the vertical and horizontal directions, respectively. Accordingly, the number of pixels in the captured image was N/L_x × N/L_y, with L_x = 2 and L_y = 4. Figure 4(b) is obtained by averaging and sub-sampling the output data of the DRPE system (Fig. 1) by a factor of 4 in the horizontal direction and 2 in the vertical direction. As a result, the CCD captures 512×256 pixels with a pixel size of 2×4∆. Figure 4(c) shows the reconstructed image after solving Eq. (12) for the output of the DRPE-CS system. Figures 4(d)-(f) zoom in on the finest resolution details, comparing the low-resolution image obtained with a conventional imaging system to the super-resolved image obtained with DRPE-CS. It is evident that the DRPE-CS method resolves almost perfectly the finest details, which are obviously lost with a conventional imaging system. It can be seen in Fig. 4(e) that details corresponding to 1/4 line pairs per pixel are irresolvable, while in Fig. 4(f) the finest details, corresponding to 1/2 line pairs per pixel, are evident. Thus a resolution gain of at least 2 is demonstrated.
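A minimal sketch of the acquisition side of this simulation follows (our own illustration; the reconstruction step of Eq. (12) is not included). It encodes a 1024×1024 input with DRPE and then applies the 2×4 sensor averaging, yielding the 512×256 measured image.

```python
# Sketch (NumPy only) of the geometrical degradation used for Fig. 4: DRPE encoding
# followed by 2x4 block averaging on the high-resolution grid (Eq. (9) with the
# h_CCD of Eq. (10)).
import numpy as np

rng = np.random.default_rng(0)
N = 1024
t = rng.random((N, N))                                   # stand-in for the USAF chart

P = np.exp(2j * np.pi * rng.random((N, N)))              # input-plane phase mask
H = np.exp(2j * np.pi * rng.random((N, N)))              # Fourier-plane phase mask
u = np.fft.ifft2(H * np.fft.fft2(P * t))                 # DRPE-encoded field, Eq. (6)

Ly, Lx = 2, 4                                            # sensor pixel = 2x4 object pixels
u_L = u.reshape(N // Ly, Ly, N // Lx, Lx).mean(axis=(1, 3))
print(u_L.shape)                                         # (512, 256) captured samples
```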

u_d = D^{(L)}\!\left( [\, t * h_{\mathrm{random\;phase}} \,] * h_{\mathrm{random\;signs}} * h_{CCD} \right).
(14)

4.3 Lens blurring

In this subsection, we consider super-resolving an image heavily degraded by diffraction. The sensing model in this case is:

u_L(x,y) = \left\{ t(x,y)\exp[i2\pi p(x,y)] \right\} * h(x,y) * h_{Diff},
(15)

where h_Diff is the blurring induced by the finite aperture of the imaging optics. Let us define f_nom as the nominal cutoff frequency of the lens aperture required to capture the image without blurring. In this case we have to solve the following:

\min_t \|\Psi^T t\|_1 + \gamma\,\mathrm{TV}(t) \quad \text{s.t.} \quad u = F^{-1} A H F P\, t,
(16)
where A is the vector-matrix representation of the aperture function in the spatial frequency domain. Figure 5 presents simulation results for a lens with a cutoff spatial radial frequency 6 times smaller than the one required for no blurring, i.e., with a spatial frequency cutoff f_cutoff = f_nom/6, where f_nom represents the diffraction cutoff frequency matched to the rest of the system; in our simulations, f_nom is the cutoff frequency set by the object's pixel size. We also added measurement noise to the captured image such that the SNR was 37 dB. The recovery of the fine resolution details by the DRPE-CS strategy is evident from the zoomed views in Fig. 5, despite the substantial spatial frequency sub-sampling. We can notice in Fig. 5(d) that, due to the blurring, targets with spatial frequency larger than 1/12 line pairs per pixel become irresolvable with conventional imaging. On the other hand, when acquiring and reconstructing the data using DRPE-CS, details corresponding to 1/2 line pairs per pixel (see Fig. 5(e)) are clearly resolvable. Hence SR by approximately a factor of 6 is demonstrated.

Fig. 5 (a) The target in Fig. 4(a) blurred by an aperture with f_cutoff = f_nom/6, and additive noise yielding an SNR of 37 dB. (b) Reconstruction from data captured with DRPE-CS. (c) Zoom-in to the original object (Fig. 4(a)). (d) Zoom-in to (a). (e) Zoom-in to (b), which is captured with DRPE-CS.
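The forward model of Eq. (15) can be sketched in the same way (our own illustration; the aperture is modeled as an ideal circular low-pass filter with f_cutoff = f_nom/6 and noise is added for a roughly 37 dB SNR, as in the simulation; the reconstruction of Eq. (16) again requires a convex solver and is not shown).

```python
# Sketch (NumPy only) of the diffraction-limited DRPE forward model of Eq. (15):
# DRPE encoding followed by a circular low-pass aperture with radial cutoff
# f_nom / 6, plus additive noise at about 37 dB SNR.
import numpy as np

rng = np.random.default_rng(0)
N = 256
t = rng.random((N, N))                                  # stand-in for the object

P = np.exp(2j * np.pi * rng.random((N, N)))             # input-plane phase mask
H = np.exp(2j * np.pi * rng.random((N, N)))             # Fourier-plane phase mask

fx = np.fft.fftfreq(N)                                  # cycles per pixel; f_nom = 0.5
fr = np.sqrt(fx[:, None] ** 2 + fx[None, :] ** 2)
A = (fr <= 0.5 / 6).astype(float)                       # aperture: f_cutoff = f_nom / 6

u = np.fft.ifft2(A * H * np.fft.fft2(P * t))            # encode, then band-limit

snr_db = 37.0
sigma = np.sqrt(np.mean(np.abs(u) ** 2) / 10 ** (snr_db / 10))
u_noisy = u + sigma * (rng.standard_normal(u.shape)
                       + 1j * rng.standard_normal(u.shape)) / np.sqrt(2)
```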

4.4 General degrading model

As stated in Eq. (8), we can incorporate a more general degradation model, for instance when we have both blurring caused by diffraction and sub-sampling caused by the geometrical limits of the sensor (CCD, etc.). Figure 6 shows simulation results for such a scenario. The lens has a radial cutoff frequency f_cutoff = f_nom/5, and the CCD causes averaging and sub-sampling by a factor of 2 in both the horizontal and vertical directions. Noise was also added, yielding a 37 dB SNR. In Fig. 6 we focus on the fine details and notice that the blurring has made targets with more than 1/10 line pairs per pixel irresolvable, while the DRPE-CS was able to resolve 1/2 line pairs per pixel (Fig. 6(c)). Thus, a 5-fold resolution increase is evident, and the ability of DRPE-CS to handle a general image degradation model is illustrated.

Fig. 6 (a) Zoom-in to the original object (Fig. 4(a)). (b) Blurring by an aperture with f_cutoff = f_nom/5, sensor down-sampling by a 2×2 factor, and additive measurement noise yielding a 37 dB SNR. (c) Reconstruction result using the DRPE-CS strategy.

We note that the model presented in [23] is not applicable to the diffraction-limited scenario discussed in this subsection, since applying h_Diff directly to the input signal t filters out all the high frequencies before they enter the sensing system. That information would be lost and could not be uniquely reconstructed.

5. Conclusions

We have shown that the well-known double random phase encoding architecture, which has traditionally been used for optical security, can be successfully used for a new application: super-resolution with a single exposure. The technique relies heavily on the property of the double random phase encoder that it randomly spreads the data in both the space and spatial frequency domains, thus mimicking a universal random Gaussian sensing operator for compressive sensing. Arguably, we can use this sensing scheme to super-resolve almost any image degradation caused by passing through a low-pass, linear physical system. We have demonstrated numerically super-resolved reconstructions for sub-sampling caused by the geometrical limits of a sensor (such as a CCD array), for severe diffraction limitation, and for a combination of the two. The simulations demonstrated substantial improvements for both geometrical and optical super resolution. The sensing process works with a single exposure, which enables super resolution with real-time scene acquisition and, therefore, video-rate implementation.

Acknowledgements

This research was partially supported by The Israel Science Foundation grant 1039/09.

References and links

1. S. Park, M. Park, and M. Gang, "Super-resolution image reconstruction: a technical overview," IEEE Signal Process. Mag. 20(3), 21–36 (2003). [CrossRef]
2. Z. Zalevsky and D. Mendlovic, Optical Super Resolution (Springer-Verlag, 2003).
3. S. Farsiu, M. D. Robinson, M. Elad, and P. Milanfar, "Fast and robust multiframe super resolution," IEEE Trans. Image Process. 13(10), 1327–1344 (2004). [CrossRef] [PubMed]
4. S. Prasad and X. Luo, "Support-assisted optical superresolution of low-resolution image sequences: the one-dimensional problem," Opt. Express 17(25), 23213–23233 (2009). [CrossRef]
5. A. Stern, Y. Porat, A. Ben-Dor, and N. S. Kopeika, "Enhanced-resolution image restoration from a sequence of low-frequency vibrated images by use of convex projections," Appl. Opt. 40(26), 4706–4715 (2001). [CrossRef]
6. J. García, Z. Zalevsky, and D. Fixler, "Synthetic aperture superresolution by speckle pattern projection," Opt. Express 13(16), 6073–6078 (2005). [CrossRef] [PubMed]
7. A. Borkowski, Z. Zalevsky, and B. Javidi, "Geometrical superresolved imaging using nonperiodic spatial masking," J. Opt. Soc. Am. A 26(3), 589–601 (2009). [CrossRef]
8. P. Réfrégier and B. Javidi, "Optical image encryption based on input plane and Fourier plane random encoding," Opt. Lett. 20(7), 767–769 (1995). [CrossRef] [PubMed]
9. B. Javidi, G. Zhang, and J. Li, "Encrypted optical memory using double-random phase encoding," Appl. Opt. 36(5), 1054–1058 (1997). [CrossRef] [PubMed]
10. O. Matoba, T. Nomura, E. Perez-Cabre, M. S. Millan, and B. Javidi, "Optical techniques for information security," Proc. IEEE 97(6), 1128–1148 (2009). [CrossRef]
11. E. Tajahuerce, O. Matoba, S. C. Verrall, and B. Javidi, "Optoelectronic information encryption with phase-shifting interferometry," Appl. Opt. 39(14), 2313–2320 (2000). [CrossRef]
12. E. Tajahuerce, J. Lancis, P. Andres, V. Climent, and B. Javidi, "Optoelectronic information encryption with incoherent light," in Optical and Digital Techniques for Information Security, B. Javidi, ed. (Springer-Verlag, 2004).
13. O. Matoba and B. Javidi, "Encrypted optical memory system using three-dimensional keys in the Fresnel domain," Opt. Lett. 24(11), 762–764 (1999). [CrossRef]
14. G. Unnikrishnan, J. Joseph, and K. Singh, "Optical encryption by double-random phase encoding in the fractional Fourier domain," Opt. Lett. 25(12), 887–889 (2000). [CrossRef]
15. P. C. Mogensen and J. Glückstad, "Phase-only optical encryption," Opt. Lett. 25(8), 566–568 (2000). [CrossRef]
16. B. M. Hennelly, T. J. Naughton, J. McDonald, J. T. Sheridan, G. Unnikrishnan, D. P. Kelly, and B. Javidi, "Spread-space spread-spectrum technique for secure multiplexing," Opt. Lett. 32(9), 1060–1062 (2007). [CrossRef] [PubMed]
17. O. Matoba and B. Javidi, "Encrypted optical storage with angular multiplexing," Appl. Opt. 38(35), 7288–7293 (1999). [CrossRef]
18. E. Tajahuerce and B. Javidi, "Encrypting three-dimensional information with digital holography," Appl. Opt. 39(35), 6595–6601 (2000). [CrossRef]
19. X. Tan, O. Matoba, Y. Okada-Shudo, M. Ide, T. Shimura, and K. Kuroda, "Secure optical memory system with polarization encryption," Appl. Opt. 40(14), 2310–2315 (2001). [CrossRef]
20. D. Donoho, "Compressed sensing," IEEE Trans. Inf. Theory 52(4), 1289–1306 (2006). [CrossRef]
21. E. Candes and M. Wakin, "An introduction to compressive sampling," IEEE Signal Process. Mag. 25(2), 21–30 (2008). [CrossRef]
22. T. Do, T. Tran, and L. Gan, "Fast compressive sampling with structurally random matrices," in Proc. ICASSP, 3369–3372 (2008).
23. J. Romberg, "Compressive sensing by random convolution," SIAM J. Imaging Sci. 2(4), 1098–1128 (2009). [CrossRef]
24. Y. Rivenson, A. Stern, and B. Javidi, "Compressive Fresnel holography," to appear in IEEE/OSA J. Display Technol. (2010).
25. A. Stern and B. Javidi, "Random projections imaging with extended space-bandwidth product," IEEE/OSA J. Display Technol. 3(3), 315–320 (2007).

OCIS Codes
(100.2000) Image processing : Digital image processing
(100.6640) Image processing : Superresolution

ToC Category:
Image Processing

History
Original Manuscript: April 7, 2010
Revised Manuscript: June 18, 2010
Manuscript Accepted: June 23, 2010
Published: June 30, 2010

Citation
Yair Rivenson, Adrian Stern, and Bahram Javidi, "Single exposure super-resolution compressive imaging by double phase encoding," Opt. Express 18, 15094-15103 (2010)
http://www.opticsinfobase.org/oe/abstract.cfm?URI=oe-18-14-15094


