Adaptive compressive ghost imaging based on wavelet trees and sparse representation

Wen-Kai Yu, Ming-Fei Li, Xu-Ri Yao, Xue-Feng Liu, Ling-An Wu, and Guang-Jie Zhai


Optics Express, Vol. 22, Issue 6, pp. 7133-7144 (2014)
http://dx.doi.org/10.1364/OE.22.007133


Abstract

Compressed sensing is a theory that can reconstruct an image almost perfectly from only a few measurements by finding its sparsest representation. However, the computation time for large images may be a few hours or more. In this work, we demonstrate, both theoretically and experimentally, a method that combines the advantages of adaptive computational ghost imaging and compressed sensing, which we call adaptive compressive ghost imaging, whereby both the reconstruction time and the number of measurements required for any image size can be significantly reduced. The technique can be used to improve the performance of all computational ghost imaging protocols, especially when measuring ultra-weak or noisy signals, and can be extended to imaging applications at any wavelength.

© 2014 Optical Society of America

1. Introduction

Ghost imaging (GI) [1–4] has aroused considerable attention due to its counter-intuitive imaging mechanism. Initially, two photo-detectors were used to produce a GI image: a bucket detector without spatial resolution, which collects the light field coming from the object, and a detector with high spatial resolution, which records the intensity distribution of the source. The image can be retrieved only by correlating the signals of these two detectors, not from either one alone.

Since the first CS algorithm was reported by Candès, Tao and Donoho [14–16], various papers have emerged that integrate CS into GI protocols in order to recover higher quality images [17, 18]. But CS also has shortcomings: it takes much more time than the calculation of the second-order correlation, the time consumed increases exponentially with the size of the image to be recovered, and it may even fail to work under specific circumstances. Fortunately, these shortcomings can be overcome by a method named compressive adaptive computational ghost imaging (CCGI) reported by Aßmann and Bayer [19], similar to the work first proposed by Averbuch et al. [20], which directly uses the patterns that form the sparse basis in place of classical random patterns. In [19] it was demonstrated experimentally that one could take a coarse image with low resolution, perform a one-step wavelet transform, then adaptively determine the regions of large coefficients and scan these areas with higher resolution. After all measurements have been taken, a new, finer image can be reconstructed. This method requires fewer measurements than CS and has no computational overhead, and images of any size can be retrieved. However, we find that it is not the ultimate optimal solution, as the measured data can be reduced still further.

In this paper, we propose a strategy that combines all the advantages of the above while avoiding their shortcomings, which we call adaptive compressive ghost imaging (ACGI). This method has four main advantages: first, very large images can be retrieved without stringent hardware restrictions, which is a huge problem for CS algorithms; second, the number of measurements can be significantly smaller than in any other CGI method [1–4, 21]; third, the time needed to retrieve the image is reduced compared with traditional CS algorithms, so it is applicable to real-time computational ghost imaging such as compressive fluorescence microscopy [22]; fourth, by absorbing the advantages of CS, weak signals or images in noisy environments can also be recovered with high quality. This paper is organized as follows. In Sec. 2, we describe the ACGI model and illustrate how it works with results from both numerical simulation and experiment. Some discussion of the results and prospective applications is given in Sec. 3. Finally, a brief summary is presented in Sec. 4.

2. Adaptive compressive ghost imaging model and results

The CCGI protocol adaptively finds out which parts of the image need to be scanned with lower or higher resolution, block by block or pixel by pixel, instead of taking all the data. Therefore, its main advantage is that it can retrieve the image with fewer measurements and without any computational overhead. However, point-scanning the areas of interest in the image by using a spatial light modulator (SLM) takes a very long time. Besides, the regions to be scanned in finer resolution may also have a sparse representation in some basis so that their effective information content is lower than the number of pixels, and thus a full data acquisition is unnecessary.

Aiming to combine the advantages of both CCGI and CS without their shortcomings, we modify the sampling mechanism and replace the point-scanning at each target scale with single-pixel sampling according to adaptive thresholds. We use a digital micro-mirror device (DMD) instead of an SLM, which greatly speeds up the data acquisition rate.
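For illustration only (this is our own sketch, not code from the original work), the following Python snippet generates sparse binary random speckle patterns restricted to a region-of-interest mask, of the kind that would be loaded onto the DMD; the pattern density p and the example mask are hypothetical values chosen for the demonstration.

import numpy as np

def make_patterns(roi_mask, n_patterns, p=0.1, seed=0):
    """Sparse binary random speckle patterns confined to a boolean region-of-interest mask."""
    rng = np.random.default_rng(seed)
    h, w = roi_mask.shape
    # Each pattern turns on a random fraction p of the micro-mirrors inside the ROI only.
    patterns = (rng.random((n_patterns, h, w)) < p) & roi_mask[None, :, :]
    return patterns.astype(np.uint8)

# Hypothetical ROI: a 32 x 32 block inside a 64 x 64 scale.
roi = np.zeros((64, 64), dtype=bool)
roi[16:48, 16:48] = True
pats = make_patterns(roi, n_patterns=100)

# The bucket (single-pixel) value for one pattern is the scalar product with the scene:
# y_i = np.sum(pats[i] * scene)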

To illustrate our strategy, we first define a square image [Fig. 1(a)] consisting of q × q pixels and then convert it from the space domain to the wavelet domain. The wavelet decomposition procedure [23, 24] is shown in Fig. 1(b). For the first-level wavelet decomposition, the upper right, lower left and lower right sectors of the transform correspond to horizontal, vertical and diagonal edges, respectively. The upper left sector is itself divided into four quadrants: the upper left quadrant represents a coarse version of the original image, as well as its mean intensity, while the other three quadrants again contain information about horizontal, vertical and diagonal edges; this is the second-level wavelet decomposition. When we perform a wavelet transform on a natural image, which is typically compressive or has a sparse representation, only the small portion of the wavelet coefficients corresponding to sharp edges are large; the coefficients in the upper left coarse quadrant, which represent the outline of the image, are also large, so together they are enough to recover the full image approximately.
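The following minimal Python sketch (using the PyWavelets package purely for illustration; the wavelet family is not specified in this excerpt, so the Haar basis is our assumption) reproduces the quadrant layout of Fig. 1(b) for a toy image and shows how few detail coefficients are significant:

import numpy as np
import pywt  # PyWavelets, used here purely for illustration

# Toy 256 x 256 image containing a bright square, so it has a few sharp edges.
img = np.zeros((256, 256))
img[97:161, 95:159] = 1.0

# Two-level Haar decomposition: coeffs = [cA2, (cH2, cV2, cD2), (cH1, cV1, cD1)].
# pywt.coeffs_to_array packs these into the quadrant layout of Fig. 1(b), with the
# coarse approximation in the upper left corner.
coeffs = pywt.wavedec2(img, 'haar', level=2)
layout, _ = pywt.coeffs_to_array(coeffs)
print(layout.shape)  # (256, 256): same size as the image, rearranged by sub-band

# Only a small fraction of the detail coefficients are significant: this is the
# sparsity that the adaptive scheme exploits when deciding which blocks to refine.
details = np.concatenate([band.ravel() for level in coeffs[1:] for band in level])
frac_large = np.mean(np.abs(details) > 0.1 * np.abs(details).max())
print(f"fraction of significant detail coefficients: {frac_large:.3%}")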

Fig. 1 (a) A 256 × 256 pixels image of living cells, taken from the photo gallery of Matlab. (b) Two-level wavelet transform of image (a). (c) The three-level wavelet subtree.

Fig. 2 Flowchart of adaptive compressive ghost imaging.

Different from the method described in [19], instead of measuring the signal x by point-scanning with an SLM, we measure the scalar product of the signal with truly random speckles:

y = Φx + e,  x = Ψx′,  (1)

where y is the measurement vector of dimension M, Φ is an M × N sensing matrix, e is the noise vector of dimension M, and x′ is the vector of sparse representation coefficients. When M < N the problem is ill-conditioned [15, 16], but we note that the signal of interest is sparse or compressive in a certain basis Ψ. For the recovery problem, many methods [7, 8, 29, 30] have been developed during the last few years. Usually the l1 norm serves as a measure of sparsity [16], and by minimizing it we can obtain the optimal solution of the problem:

min_x′ ‖x′‖_1  subject to  ‖y − ΦΨx′‖_2^2 < ε,  (2)

where ε is the allowed error and the lp norm is defined as ‖x‖_p = (Σ_{i=1}^{N} |x_i|^p)^{1/p}. The standard method for solving the l1 minimization problem is the so-called basis pursuit [29]. There are also many other algorithms, such as OMP, IST and l1-magic, but they demand high sparsity of the signal. Unfortunately, in most cases natural images only have approximately sparse representations, in which the number of large coefficients is rather small but the remaining coefficients are not exactly zero, so it is difficult for these algorithms to recover high quality images. The ACGI strategy that we use is based on the wavelet tree, similar to the wavelet basis used in conventional CS algorithms, so if we were also to use a CS algorithm based on a wavelet transform, the effect of the secondary compression would not be obvious. This problem can only be solved by using, for the secondary compression, a basis that is independent of the wavelet transform; total variation is a good choice. TVAL3 [30] is a good example of a total variation method that does not require ΦΦ′ = I and has good generality as well as noise robustness, which makes it very suitable for our imaging system; the solution can be approximately sparse. We therefore choose TVAL3 as the compressive reconstruction algorithm, instead of the other standard algorithms, to address the optimization problem of Eq. (2) and to smooth the noisy image. Here, we perform total variation reconstruction on the union of the regions of interest at each level to acquire a considerable reduction ratio. In addition, we use sparse speckles to obtain better image quality.
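As one concrete illustration of the generic recovery algorithms mentioned above (and not of the TVAL3 pipeline actually used in this work), the following Python sketch implements iterative soft-thresholding (IST) [7] for the Lagrangian form of Eq. (2) on a toy sparse signal, with Ψ taken as the identity for simplicity; the dimensions, sparsity level and regularization weight are arbitrary choices for the demonstration.

import numpy as np

def ista(A, y, lam=0.05, n_iter=200):
    """Iterative soft-thresholding for min_x' 0.5*||y - A x'||_2^2 + lam*||x'||_1."""
    L = np.linalg.norm(A, 2) ** 2                 # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y)                  # gradient of the quadratic data term
        z = x - grad / L                          # gradient step
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return x

# Toy demo: a 20-sparse signal of length 400 recovered from 150 random measurements.
rng = np.random.default_rng(0)
n, m, k = 400, 150, 20
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
A = rng.standard_normal((m, n)) / np.sqrt(m)
y = A @ x_true + 0.01 * rng.standard_normal(m)
x_hat = ista(A, y)
print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))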

Fig. 3 The simulation procedure for a gray-scale map. All figures are automatically gray-scale compensated. (a) 64 × 64 pixels shrunken image of Fig. 1(a). (b) Regions of large coefficients in Fig. 3(a) are adaptively searched by a one-step wavelet transform, and these areas are scanned with higher resolution in the next 128 × 128 image. (c) As in Step (b) but for Fig. 3(b) to produce the next 256 × 256 image. (d)–(f) are the reconstructed images corresponding, respectively, to 64 × 64, 128 × 128 and 256 × 256 pixels, using speckles of size 4 × 4, 2 × 2 and 1 × 1 pixels in our algorithm. (g) Wavelet transform of the final image; large wavelet coefficients are shown in white, small ones in black. (h) The result (g) converted back to a real space image using the inverse wavelet transform. For this example the total number of measurements needed is roughly 24.2% of the number of pixels of Fig. 1(a).

For a quantitative comparison of the image quality, we introduce the peak signal-to-noise ratio (PSNR) and the mean square error (MSE) as figures of merit:

PSNR = 10 log_10 (255^2 / MSE),  (3)

where

MSE = (1/st) Σ_{i,j=1}^{s,t} [T_o(i,j) − T̃(i,j)]^2,  (4)

and T_o and T̃ represent the original and retrieved images, each consisting of s × t pixels. Naturally, the larger the PSNR value, the better the quality of the recovered image. The PSNR of Fig. 3(h) is 28.055 dB. The whole three-level wavelet decomposition corresponds to 64 × 64, 128 × 128 and 256 × 256 pixel square regions, respectively, so the total signal dimension is 86,016. In this case, after setting appropriate thresholds of 81.000 (j = 2, b_j = 24.697%) and 60.221 (j = 1, b_j = 16.787%), there are 4,096 + 4,496 + 12,224 = 20,816 patterns that need to be scanned in CCGI. Our method, however, can perform 60%, 90% and 90% compressive sampling on the respective square regions, which decreases the sampling rate by 3.847%, while b_j can also be changed by choosing suitable thresholds. We can see that the total acquisition rate is roughly proportional to the area of detail, that is, to the number of nonzero high-frequency wavelet coefficients.
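A minimal Python helper implementing Eqs. (3)–(4) for 8-bit images (our own sketch, not code from the paper):

import numpy as np

def psnr(original, retrieved):
    """Peak signal-to-noise ratio in dB, per Eqs. (3)-(4), assuming a peak value of 255."""
    original = np.asarray(original, dtype=np.float64)
    retrieved = np.asarray(retrieved, dtype=np.float64)
    mse = np.mean((original - retrieved) ** 2)   # Eq. (4)
    return 10.0 * np.log10(255.0 ** 2 / mse)     # Eq. (3)

# Example: a reconstruction differing from the original by about 5 gray levels RMS.
rng = np.random.default_rng(1)
truth = rng.integers(0, 256, size=(256, 256)).astype(float)
print(f"PSNR = {psnr(truth, truth + rng.normal(0, 5, truth.shape)):.2f} dB")  # roughly 34 dB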

Fig. 4 Experimental setup of adaptive compressive ghost imaging.

For simplicity, but without loss of generality, only a gray-scale object is considered here. We choose a black-and-white 1951 USAF resolution test chart as the object, illustrated in Fig. 5(a), where the black part of the mask blocks the light and the white part transmits light. We used a photomultiplier tube (PMT) (Hamamatsu H7468-20) as the bucket (single-pixel) detector for ultra-weak light detection, which can be operated at up to 450 Hz. Although our technique is limited by this readout frequency, it is still reasonably fast. The PMT bias voltage was set at 500 V, and the integration time over the duration of each pattern was set at 1800 μs, with a dead time of 200 μs, to ensure that the transmitted light intensity did not exceed the saturation limit of the PMT. The PMT is active only when the rising edges of the pattern trigger signals arrive. Here we also used a total three-level wavelet transform and performed 45.117% and 22.559% finer measurements, respectively, for the series of largest wavelet coefficients on the 2 × 2 and 1 × 1 scales, by setting different thresholds. For this case we had m_3 = 2,464 for the first 64 × 64 coarse image, m_2 = 7,392 for the second stage, and m_1 = 14,784 for the third step, so the total number of measurements needed was 24,640, approximately 28.646% of 86,016 and 37.598% of the 65,536 original image pixels. Of course, we could have fixed the number of finer measurements for each scale beforehand, but in this way some image quality would have been sacrificed. Besides, the diameter D of the imaging lens we used was 25.4 mm, with a focal length f of 100 mm and central wavelength λ of 550 nm. As is known, the minimum resolvable angle is θ_min = d/(2f) = 1.22λ/D, and the Airy disk radius can be written as d/2 = 1.22λf/D, where d is its diameter. Considering the magnification β and micro-mirror size ρ, we obtain the inequality βd/2 = 1.22βλf/D ≥ ρ. If β ≥ ρD/(1.22λf) ≈ 5.178, then the achievable resolution of the system depends on the spatial resolution of the lens; otherwise, it depends on the micro-mirror size ρ. Since the DMD actually replaces the function of a traditional charge-coupled device (CCD) detector, the image should match the size of the screen to obtain optimal results. For objects that are very small, it would therefore be necessary to magnify the image first with a microscope. The reconstructed image of the test chart is given in Fig. 5(b).
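As a numerical check of the diffraction-limit condition above (our own arithmetic), the following short Python sketch uses the quoted lens parameters; the micro-mirror pitch ρ is not given in this excerpt, so the value 13.68 μm is our assumption, chosen because it reproduces the quoted threshold of about 5.178.

# Diffraction-limit check for the setup in the text (a sketch under assumed rho).
D = 25.4e-3    # lens diameter (m)
f = 100e-3     # focal length (m)
lam = 550e-9   # central wavelength (m)
rho = 13.68e-6 # assumed micro-mirror pitch (m), not stated in this excerpt

theta_min = 1.22 * lam / D                  # minimum resolvable angle (rad)
airy_r = 1.22 * lam * f / D                 # Airy disk radius d/2 at the image plane (m)
beta_limit = rho * D / (1.22 * lam * f)     # magnification above which the lens limits resolution

print(f"theta_min = {theta_min:.3e} rad, Airy radius = {airy_r * 1e6:.2f} um")
print(f"beta threshold = {beta_limit:.3f}")  # about 5.178 with the assumed rho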

Fig. 5 (a) The original target, a black-and-white 1951 USAF resolution test chart. (b) The experimentally reconstructed image of 256 × 256 pixels, automatically gray-scale compensated.

As this method is adaptive and compressive, we only need to load sparse binary random speckles onto the DMD at the beginning of each stage, so it is no longer necessary to precompute q²[1 + 1/4 + ⋯ + (1/4)^(L−1)] = (4q²/3)[1 − (1/4)^L] patterns, or to compute them on the fly, for SLM point-scanning as in [19]. Although the sampled area seems to be enlarged, we achieve further sub-sampling on the basis of CCGI. Besides, the speckle regions can be changed by choosing optimal thresholds to obtain the best imaging quality. The exact number of measurements needed may be hard to predict, as it depends on how sparse an image is in the wavelet basis, as well as on the inter-relevance and further compressibility of its details. For larger images, the advantage of this strategy shows up in the realization of high sub-sampling ratios in real time. Thus our algorithm may provide some inspiration for practical applications that employ single-pixel detectors to record large images or data known a priori to be compressive.
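As a quick numerical check of this formula (our own arithmetic, not from the paper), the three-level case used above gives exactly the total signal dimension quoted in Sec. 2:

# Patterns needed for full SLM point-scanning in CCGI, per the formula above.
q, L = 256, 3
n_patterns = 4 * q**2 / 3 * (1 - (1 / 4)**L)
print(n_patterns)  # 86016.0, matching the total signal dimension quoted in Sec. 2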

3. Discussion

3.1. Analysis of results

We now discuss a more practical condition in which noise is added, as shown in Figs. 6(a)–6(i) and Figs. 7(a)–7(b).

Fig. 6 Noise-added simulation results: (a) The original image also comes from the photo gallery of Matlab. Top row: images retrieved by Aßmann’s method after adding white noise obeying a normal distribution of (b) N(0, 102), (c) N(0, 202), (d) N(0, 302), with PSNRs of 27.470, 23.516 and 20.068 dB, respectively. Bottom row: (f)–(h) are the corresponding images reconstructed by our method, with PSNRs of 27.643, 24.344 and 20.322 dB, respectively. (e) and (i) are enlarged sections of (d) and (h), respectively.
Fig. 7 The PSNRs of CCGI and ACGI vs. (a) the standard deviation of Gaussian white noise; (b) the mean Poisson noise; (c) the total acquisition rate.

For a fair comparison, we calculate the PSNR with a fixed total acquisition rate at the same noise level. It is evident that as the standard deviation of the Gaussian white noise or the mean of the Poisson noise increases, the PSNR of ACGI gradually becomes better than that of CCGI, which owes much to the excellent robustness of CS against noise. To demonstrate the advantages of ACGI further, the PSNRs for different total acquisition rates under the same Poisson noise of mean value 40 are given in Fig. 7(c). It can be seen that the PSNR is almost linearly proportional to the total acquisition rate, and that ACGI is able to retrieve an image of better quality than CCGI with much less data manipulation and sampling time. From the above analysis, we regard our method as also being more suitable for measuring images containing strong noise or weak signals. It is interesting that the PSNR curves all exhibit slight oscillations. We believe this is partly due to the randomness of the noise fluctuations, and partly because the images retrieved at each level consequently change as well.

Furthermore, we would like to compare ACGI with standard CS algorithms that reconstruct the full image, as shown in Fig. 8. For all four figures the sampling rate is the same, 25% of the original 65,536 image pixels. Figure 8(a) shows the ACGI result, while Figs. 8(b)–8(d) show the images recovered by the RecPF [32], TVAL3 and SSMP [33] software toolkits, respectively. Of the many algorithms and approaches available, most (e.g., OMP, BP, IST, SpaRSA) have huge computational overhead when dealing with large images, and the time consumed increases horrendously with image size, taking perhaps several hours or more. Even if the large images are finally reconstructed, the results are far from satisfactory, so it is not necessary to make a comparison with these algorithms. A few optimization algorithms have appeared in recent years that can deal with large objects under specific conditions, such as RecPF and SSMP; the former is based on partial Fourier data, while the latter decreases storage space and computation time by using sparse random matrices. Figures 8(c)–8(d) both used sparse random speckles and can reconstruct the full image in a relatively short time. Our ACGI also benefits from this great merit of sparse random speckles. From Fig. 8 we see that although ACGI gives a slightly lower quality image than RecPF, it is better than the other two, while taking much less computation time than all of them. ACGI is thus an intermediate approach to computational ghost imaging, benefiting from both CCGI and CS.

Fig. 8 Comparison between ACGI and other standard CS techniques which reconstruct the full 256 × 256 pixel image. (a)–(d): Images recovered by ACGI, RecPF, TVAL3 and SSMP, with a PSNR of 30.882, 35.663, 28.270 and 21.781 dB, and core computation times used for reconstruction of 3.024, 11.900, 25.217 and 702.618 s, respectively.

3.2. Potential applications

Our new method offers a general approach applicable to all traditional GI based on thermal light. Here we present some simulation results by which we can verify its validity and feasibility in potential applications, such as time-resolved and multi-wavelength applications of correlation imaging.

Fig. 9 Simulation results of time-resolved 256 × 256 pixel images: (a)–(d) Original time-resolved images; (e)–(h) Corresponding reconstructed images with a PSNR of 25.730, 26.268, 24.228 and 26.243 dB, respectively.

In order to investigate the feasibility of color ACGI, we took a 256 × 256 pixel color photograph of some flowers ourselves. The simulated ACGI results are shown in Fig. 10. The original image [Fig. 10(a)] is split into R (red), G (green) and B (blue) planes to simulate the effect of RGB color filters, as shown in Figs. 10(b)–10(d). By overlaying the R, G and B adaptive compressive ghost images shown in Figs. 10(e)–10(g), we obtain a colored ghost image, shown in Fig. 10(h). These results show that a multi-wavelength composite image can be reconstructed clearly, with 255 tones and without any color distortion.
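A minimal Python sketch of this per-channel pipeline (our own illustration; acgi_reconstruct is a placeholder for the per-plane ACGI reconstruction, which is not shown here):

import numpy as np

def reconstruct_color(rgb_image, acgi_reconstruct):
    """Split an RGB image into R, G, B planes, image each plane independently,
    then re-stack the three recovered planes into a color image (cf. Fig. 10)."""
    channels = [rgb_image[..., c] for c in range(3)]        # R, G, B planes
    recovered = [acgi_reconstruct(ch) for ch in channels]   # per-channel ghost images
    return np.stack(recovered, axis=-1)                     # recombined color image

# Example with an identity "reconstruction", just to show the plumbing.
toy = np.random.default_rng(2).random((256, 256, 3))
assert np.allclose(reconstruct_color(toy, lambda ch: ch), toy)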

Fig. 10 Simulation procedure for color images. (a) A 256 × 256 pixel photo taken by ourselves. (b)–(d) correspond, respectively, to the red, green and blue components of (a). (e)–(g) are the corresponding ACGI recovered images, with a PSNR of 32.812, 32.633 and 33.185 dB, respectively. (h) is the color image retrieved by synthesizing the three components (e)–(g).

In the future, we will continue working towards the experimental realization of adaptive compressive spectral imaging with a dispersion grating, collecting signals that contain visible and near-infrared wavelength information, for applications such as fluorescence imaging. A more extended experimental setup and quantitative analysis will be presented in a future paper.

4. Summary

In this work, we have proposed and demonstrated a new method named adaptive compressive ghost imaging which combines the advantages of both adaptive computational ghost imaging and compressed sensing protocols. Both numerical simulations and experimental realizations have been used to demonstrate its exceptional features. First, very large images can be retrieved without stringent hardware restrictions; second, the number of measurements is much less than for CGI or pure CS methods; third, the computational time is greatly reduced, compared with conventional CS algorithms; fourth, it is robust against noise. It is therefore a far better choice for retrieving high quality images from ultra-weak or noisy signals. Furthermore, this technique can be used to improve all computational ghost imaging or single-pixel imaging protocols where the image has sparse representation in the wavelet basis, and can be extended to imaging applications at any wavelength, including non-visible wavebands.

Acknowledgments

This work was supported by the National Key Scientific Instrument and Equipment Development Project of China (Grant No. 2013YQ030595), the National High Technology Research and Development Program of China (Grant No. 2011AA120102), the State Key Development Program for Basic Research of China (Grant No. 2010CB922904), and the National Natural Science Foundation of China (Grant Nos. 61274024 and 11375224).

References and links

1. D. V. Strekalov, A. V. Sergienko, D. N. Klyshko, and Y. H. Shih, “Observation of two-photon “ghost” interference and diffraction,” Phys. Rev. Lett. 74, 3600–3603 (1995). [CrossRef] [PubMed]
2. A. Gatti, E. Brambilla, M. Bache, and L. A. Lugiato, “Ghost imaging with thermal light: comparing entanglement and classical correlation,” Phys. Rev. Lett. 93, 093602 (2004). [CrossRef] [PubMed]
3. D. Zhang, Y. H. Zhai, L. A. Wu, and X. H. Chen, “Correlated two-photon imaging with true thermal light,” Opt. Lett. 30(18), 2354–2356 (2005). [CrossRef] [PubMed]
4. B. I. Erkmen and J. H. Shapiro, “Unified theory of ghost imaging with Gaussian-state light,” Phys. Rev. A 77(4), 043809 (2008). [CrossRef]
5. J. H. Shapiro, “Computational ghost imaging,” Phys. Rev. A 78, 061802 (2008). [CrossRef]
6. Y. Bromberg, O. Katz, and Y. Silberberg, “Ghost imaging with a single detector,” Phys. Rev. A 79(5), 053840 (2009). [CrossRef]
7. M. Fornasier and H. Rauhut, “Iterative thresholding algorithms,” Appl. Comput. Harmon. Anal. 25(2), 187–208 (2008). [CrossRef]
8. N. B. Karahanoglu and H. Erdogan, “A* orthogonal matching pursuit: best-first search for compressed sensing signal recovery,” Digit. Sig. Process. 22(4), 555–568 (2012). [CrossRef]
9. M. F. Duarte, M. A. Davenport, D. Takhar, J. N. Laska, T. Sun, K. F. Kelly, and R. G. Baraniuk, “Single-pixel imaging via compressive sampling,” IEEE Signal Proc. Mag. 25(2), 83–91 (2008). [CrossRef]
10. W. L. Chan, K. Charan, D. Takhar, K. F. Kelly, R. G. Baraniuk, and D. M. Mittleman, “A single-pixel terahertz imaging system based on compressed sensing,” Appl. Phys. Lett. 93(12), 121105 (2008). [CrossRef]
11. P. Sen and S. Darabi, “Compressive dual photography,” Computer Graphics Forum 28(2), 609–618 (2009). [CrossRef]
12. S. Li, X. R. Yao, W. K. Yu, L. A. Wu, and G. J. Zhai, “High-speed secure key distribution over an optical network based on computational correlation imaging,” Opt. Lett. 38(12), 2144–2146 (2013). [CrossRef] [PubMed]
13. W. K. Yu, S. Li, X. R. Yao, X. F. Liu, L. A. Wu, and G. J. Zhai, “Protocol based on compressed sensing for high-speed authentication and cryptographic key distribution over a multiparty optical network,” Appl. Opt. 52(33), 7882–7888 (2013). [CrossRef]
14. E. J. Candès, J. Romberg, and T. Tao, “Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information,” IEEE Trans. Inf. Theory 52(2), 489–509 (2006). [CrossRef]
15. D. Donoho, “Compressed sensing,” IEEE Trans. Inf. Theory 52(4), 1289–1306 (2006). [CrossRef]
16. E. J. Candès, “Compressive sampling,” in Proc. Int. Cong. Math. (European Mathematical Society, Madrid, Spain, 2006), 3, pp. 1433–1452.
17. O. Katz, Y. Bromberg, and Y. Silberberg, “Compressive ghost imaging,” Appl. Phys. Lett. 95(13), 131110 (2009). [CrossRef]
18. W. L. Gong and S. S. Han, “Experimental investigation of the quality of lensless super-resolution ghost imaging via sparsity constraints,” Phys. Lett. A 376(17), 1519–1522 (2012). [CrossRef]
19. M. Aßmann and M. Bayer, “Compressive adaptive computational ghost imaging,” Sci. Rep. 3, 1545 (2013).
20. A. Averbuch, S. Dekel, and S. Deutsch, “Adaptive compressed image sensing using dictionaries,” SIAM J. Imaging Sci. 5(1), 57–89 (2012). [CrossRef]
21. M. F. Li, Y. R. Zhang, X. F. Liu, X. R. Yao, K. H. Luo, H. Fan, and L. A. Wu, “A double-threshold technique for fast time-correspondence imaging,” Appl. Phys. Lett. 103, 211119 (2013). [CrossRef]
22. V. Studer, J. Bobin, M. Chahid, H. Moussavi, E. J. Candès, and M. Dahan, “Compressive fluorescence microscopy for biological and hyperspectral imaging,” Proc. Natl. Acad. Sci. USA 109(26), E1679–E1687 (2012). [CrossRef]
23. S. Mallat, “A theory for multiresolution signal decomposition: the wavelet representation,” IEEE Trans. Pattern Anal. 11(7), 674–693 (1989). [CrossRef]
24. S. Mallat, A Wavelet Tour of Signal Processing: The Sparse Way (Elsevier, 2009), pp. 340–346.
25. J. Shapiro, “Embedded image coding using zerotrees of wavelet coefficients,” IEEE Trans. Signal Proces. 41(12), 3445–3462 (1993). [CrossRef]
26. A. Said and W. Pearlman, “A new, fast, and efficient image codec based on set partitioning in hierarchical trees,” IEEE Trans. Circ. Syst. Video Technol. 6(3), 243–250 (1996). [CrossRef]
27. S. G. Chang, B. Yu, and M. Vetterli, “Adaptive wavelet thresholding for image denoising and compression,” IEEE Trans. Image Process. 9(9), 1532–1546 (2000). [CrossRef]
28. J. Haupt, R. Nowak, and R. Castro, “Adaptive sensing for sparse signal recovery,” in Proceedings of the 2009 IEEE Digital Signal Processing Workshop and 5th IEEE Signal Processing Education Workshop (Marco Island, FL, 2009), pp. 702–707.
29. S. S. Chen, D. L. Donoho, and M. A. Saunders, “Atomic decomposition by basis pursuit,” SIAM J. Sci. Comput. 20(1), 33–61 (1998). [CrossRef]
30. C. B. Li, “An efficient algorithm for total variation regularization with applications to the single pixel camera and compressive sensing,” Master's thesis, Rice University (2010).
31. Texas Instruments, “DLP Discovery 4100 chipset data sheet (Rev. A),” (2013), http://www.ti.com/lit/er/dlpu008a/dlpu008a.pdf.
32. J. Yang, Y. Zhang, and W. Yin, “A fast alternating direction method for TVL1-L2 signal reconstruction from partial Fourier data,” IEEE J. Sel. Top. Signal Process. 4(2), 288–297 (2010). [CrossRef]
33. R. Berinde and P. Indyk, “Sequential sparse matching pursuit,” in Proc. 47th Annu. Allerton Conf. Commun. Control Comput. (2009), pp. 36–43.

OCIS Codes
(110.2990) Imaging systems : Image formation theory
(200.4740) Optics in computing : Optical processing
(110.1085) Imaging systems : Adaptive imaging
(110.3010) Imaging systems : Image reconstruction techniques

ToC Category:
Imaging Systems

History
Original Manuscript: December 27, 2013
Revised Manuscript: February 25, 2014
Manuscript Accepted: March 6, 2014
Published: March 19, 2014

Citation
Wen-Kai Yu, Ming-Fei Li, Xu-Ri Yao, Xue-Feng Liu, Ling-An Wu, and Guang-Jie Zhai, "Adaptive compressive ghost imaging based on wavelet trees and sparse representation," Opt. Express 22, 7133-7144 (2014)
http://www.opticsinfobase.org/oe/abstract.cfm?URI=oe-22-6-7133


