## Photon counting compressive depth mapping

Optics Express, Vol. 21, Issue 20, pp. 23822-23837 (2013)

http://dx.doi.org/10.1364/OE.21.023822


### Abstract

We demonstrate a compressed sensing, photon counting lidar system based on the single-pixel camera. Our technique recovers both depth and intensity maps from a single under-sampled set of incoherent, linear projections of a scene of interest at ultra-low light levels around 0.5 picowatts. Only two-dimensional reconstructions are required to image a three-dimensional scene. We demonstrate intensity imaging and depth mapping at 256 × 256 pixel transverse resolution with acquisition times as short as 3 seconds. We also show novelty filtering, reconstructing only the difference between two instances of a scene. Finally, we acquire 32 × 32 pixel real-time video for three-dimensional object tracking at 14 frames-per-second.

© 2013 Optical Society of America

## 1. Introduction


## 2. Compressive sensing

### 2.1. Introduction to CS

Compressed sensing (CS) [17] is a technique for recovering an *n*-dimensional signal *X* from *m* < *n* measurements. CS exploits a signal’s compressibility to require fewer measurements than the Nyquist limit. The detection process can be modeled by interacting *X* with an *m* × *n* sensing matrix *A* such that

*Y* = *AX* + Γ,    (1)

where *Y* is an *m*-dimensional vector of measurements and Γ is an *m*-dimensional noise vector. Because *m* < *n*, and rows of *A* are not necessarily linearly independent, a given set of measurements does not specify a unique signal. CS therefore seeks the sparsest *X* consistent with the measurements:

min_{*X*} ½ ‖*Y* − *AX*‖₂² + *τg*(*X*),    (2)

where *τ* is a scaling constant between penalties. The first penalty is a least-squares term that ensures the signal matches the measured values. The second penalty, *g*(*X*), is a sparsity-promoting regularization function. Typical *g*(*X*) include the *ℓ*_{1} norm of *X* or the total variation of *X*. For a *k*-sparse signal (only *k* significant elements), exact reconstruction is possible for *m* ∝ *k* log(*n/k*) measurements. In practice, *m* is often only a few percent of *n*. For minimum *m*, the sensing matrix must be incoherent with the sparse basis, with the counter-intuitive result that random, binary sensing vectors work well [18].

### 2.2. Single-pixel camera

In the single-pixel camera [11], the scene is imaged onto a digital micromirror device (DMD), which sequentially displays binary patterns drawn from the rows of the sensing matrix *A*. Elements of *Y* are the measured intensities for corresponding patterns. An image of the scene is then recovered by an algorithm that solves Eq. (2). Similar compressive techniques have been applied in fields as varied as magnetic resonance imaging [19], astronomy [20], quantum tomography [21], and entanglement imaging [22].

## 3. Compressive depth mapping

### 3.1. Adapting SPC for depth mapping

Recovering the intensity map *X*_{I} (a gray-scale image) is identical to the normal single-pixel camera. Patterns from the sensing matrix are placed sequentially on the DMD. For each pattern, the number of detected photons is recorded to obtain a measurement vector *Y*_{I}. Eq. (2) is then used to find the intensity image *X*_{I}.

Recovering the depth map *X*_{D} from the TOF information is not straightforward because the measurements are nonlinear in *X*_{D}. Consider photon arrivals during one pattern. Each photon has a specific TOF, but is only spatially localized to within the sensing pattern. It is possible, and likely, that multiple photons will arrive from the same location in the scene. Individual detection events therefore contain information about both intensity and depth.

Instead, consider the signal *X*_{Q} made up of the element-wise product *X*_{Q} = *X*_{I} .\* *X*_{D}. Unlike *X*_{D}, *X*_{Q} can be linearly sampled by summing the TOF of each photon detected during the measurement. This can be seen by expanding this total TOF sum over the contribution from each pixel in the scene, such that

*Y*_{Q,*i*} = *C* Σ_{*j*} *A*_{*ij*} *η*_{*j*} *T*_{*j*},    (3)

where *i* is an index over sensing patterns and *j* is an index over all DMD pixels. *A*_{*ij*} is the DMD status (0 or 1) of pixel *j* during the *i*-th pattern, *η*_{*j*} is the number of photons reaching pixel *j* during the measurement, and *T*_{*j*} is the TOF of a photon arriving at pixel *j*. *C* is a constant factor converting photon number to intensity and TOF to depth. Each pixel where *A*_{*ij*} = 1 (the DMD mirror is “on”) contributes *η*_{*j*}*T*_{*j*} to the TOF sum for pattern *i*. This is a product of an intensity (photon number) and a depth (TOF). The *j*-th element of *X*_{Q} is therefore equal to *Cη*_{*j*}*T*_{*j*}. Because Eq. (3) takes the form of Eq. (1), Eq. (2) is suitable for recovering *X*_{Q}.

The depth map is then found by dividing *X*_{Q} by *X*_{I}. Note that *Y*_{I} and *Y*_{Q} are acquired simultaneously from the same signal; *Y*_{I} represents the number of photon arrivals for each pattern and *Y*_{Q} is the sum of photon TOF for each pattern. The only increase in complexity is that two optimization problems must now be solved, but these are the well understood linear problems characteristic of CS.
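The linearity asserted by Eq. (3) is easy to verify with a toy simulation: tallying per-photon TOFs pattern by pattern gives exactly the projection of the product signal *η* .\* *T*. The pixel counts, TOF values, and *C* = 1 below are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
n_pix, n_pat = 64, 10
A = rng.integers(0, 2, (n_pat, n_pix))        # DMD mirror states (0/1) per pattern
eta = rng.poisson(5.0, n_pix)                 # photons arriving from each pixel
T = rng.uniform(10e-9, 40e-9, n_pix)          # per-pixel time of flight, seconds

# Event-style tally: for each pattern, add the TOF of every photon that
# reflects off an "on" mirror (C = 1 for simplicity).
Y_Q = np.zeros(n_pat)
for i in range(n_pat):
    for j in range(n_pix):
        if A[i, j]:
            Y_Q[i] += eta[j] * T[j]           # eta[j] photons, each with TOF T[j]

# Identical to a linear projection of the element-wise product signal X_Q,
# so Eq. (2) applies to the pair (A, Y_Q) unchanged.
X_Q = eta * T
```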

### 3.2. Protocol

1. Acquire measurement vectors *Y*_{Q} = *AX*_{Q} and *Y*_{I} = *AX*_{I}, where *X*_{Q} = *X*_{I} .\* *X*_{D}.
2. Solve Eq. (2) with *Y*_{Q} to obtain an approximate reconstruction *X̄*_{Q}.
3. Apply hard thresholding to *X̄*_{Q}. The subset of non-zero coefficients in a sparse representation of *X̄*_{Q} now forms an over-determined, dense recovery problem.
4. Perform a least-squares debiasing routine on the non-zero coefficients of *X̄*_{Q} in the sparse representation to find their correct values and recover *X*_{Q}.
5. Take the significant coefficients of *X*_{I} to be identical to those of *X*_{Q}, and perform the same least-squares debiasing on these coefficients of *X*_{I}.
6. Recover *X*_{D} by taking *X*_{D} = Nz(*X*_{I}) .\* *X*_{Q} ./ *X*_{I}, where Nz(*x*) = 1 for non-zero *x* and 0 otherwise.
7. (optional) In the case of very noisy measurements, perform masking on *X*_{D} and *X*_{I}.

We first acquire *Y*_{I} and *Y*_{Q} as previously described. Rather than independently solve Eq. (2) for both measurement vectors, we only perform sparsity maximization on *Y*_{Q}. In practice, solvers for Eq. (2) tend to be more effective at determining significant coefficients than at finding their correct values, particularly given noisy measurements [23].

We therefore subject the approximate reconstruction *X̄*_{Q} to uniform, hard thresholding [24] in a sparse representation, which forces the majority of the coefficients to zero. The sparse basis is typically wavelets, but may be the pixel basis for simple images. The surviving non-zero coefficients define an over-determined, dense problem; a least-squares debiasing routine finds their correct values and recovers *X*_{Q} [23].

We then take the significant coefficients of *X*_{I} to be the same as for *X*_{Q}, and apply least-squares debiasing to *X*_{I} as well. Finally, we recover the depth map *X*_{D} by taking

*X*_{D} = Nz(*X*_{I}) .\* *X*_{Q} ./ *X*_{I},    (4)

where Nz(*p*) = 1 for nonzero *p* and 0 otherwise. This prevents a divide-by-zero situation when an element of *X*_{I} = 0, meaning no light came from that location. In the case of very noisy measurements, an optional masking step may additionally be applied to *X*_{D} and *X*_{I} for spatial clean-up.
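A minimal numerical sketch of protocol steps 3–6 follows. The dimensions and threshold are hypothetical, the pixel basis stands in for the wavelet basis, and a noisy copy of *X*_{Q} stands in for the step-2 solver output.

```python
import numpy as np

rng = np.random.default_rng(2)
n, m, k = 128, 60, 4                     # pixels, measurements, scene sparsity
support = rng.choice(n, k, replace=False)
X_I = np.zeros(n); X_I[support] = rng.uniform(1.0, 2.0, k)   # intensity map
X_D = np.zeros(n); X_D[support] = rng.uniform(3.0, 6.0, k)   # depth map
X_Q = X_I * X_D                                              # product signal

A = rng.integers(0, 2, (m, n)).astype(float)  # binary sensing patterns
Y_I, Y_Q = A @ X_I, A @ X_Q                   # simultaneous measurement vectors

# Step 2 stand-in: a noisy approximate reconstruction from the sparse solver.
Xbar_Q = X_Q + rng.normal(0.0, 0.05, n)

keep = np.abs(Xbar_Q) > 0.5                   # step 3: uniform hard threshold
X_Q_hat = np.zeros(n)                         # step 4: least-squares debias vs. Y_Q
X_Q_hat[keep] = np.linalg.lstsq(A[:, keep], Y_Q, rcond=None)[0]
X_I_hat = np.zeros(n)                         # step 5: same support, debias vs. Y_I
X_I_hat[keep] = np.linalg.lstsq(A[:, keep], Y_I, rcond=None)[0]

nz = X_I_hat != 0                             # step 6: X_D = Nz(X_I).*X_Q./X_I
X_D_hat = np.where(nz, X_Q_hat, 0.0) / np.where(nz, X_I_hat, 1.0)
```

The thresholded support makes each least-squares problem over-determined (*m* rows, far fewer unknowns), which is why debiasing is cheap and stable even when the initial solve is rough.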

## 4. Experimental setup

To acquire a scene, *m* sensing patterns are displayed on the DMD at 1440 Hz. For each pattern, the number of photon arrivals and the sum of photon TOF are recorded. If 1/1440 seconds is an insufficient per-pattern dwell time *t*_{p}, the pattern sequence is repeated *r* times so that *t*_{p} = *r*/1440 and the total exposure time *t* is *t* = *mt*_{p} = *mr*/1440. The average detected photon rate is 2 million counts per second, or 0.5 picowatts.
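The timing bookkeeping above amounts to *t* = *mr*/1440; a small helper (not from the paper) reproduces, for example, the 6.07 minute exposure of the 256 × 256, *m* = 0.2*n* acquisition reported later.

```python
# Exposure-time bookkeeping for the 1440 Hz DMD: t = m * t_p with t_p = r / 1440.
def exposure_time(m, r, rate_hz=1440.0):
    """Total exposure in seconds for m patterns, each repeated r times."""
    return m * r / rate_hz

# 13108 patterns, each displayed 40 times (the Sec. 5.1 acquisition):
t = exposure_time(m=13108, r=40)
print(t / 60)   # ~6.07 minutes
```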

The sparsity-promoting regularizer *g*(*X*) is either the *ℓ*_{1} norm or total variation, depending on the scene and exposure time. For *ℓ*_{1} minimization, we use a gradient projection solver [23]; for total variation minimization, we use TVAL3 [25].

Reconstructions are performed at *n* = 256 × 256 or *n* = 32 × 32. A 256 × 256 reconstruction consistently took less than 5 minutes, while a 32 × 32 reconstruction took less than 5 seconds. The solvers and supporting scripts are implemented in Matlab, with no particular emphasis on optimizing their speed. The majority of computational resources are used for performing Fast Hadamard Transforms and reading and writing large files. If more efficient file formats are used, and the Fast Hadamard Transform is computed on a GPU or DSP, reconstruction speeds could approach real-time.

### 4.1. Noise considerations

Two primary noise sources affect the measurement vectors *Y*_{I} and *Y*_{Q}. The first is simply shot noise, a fundamental uncertainty in the number of measured photons limiting the measured SNR to √*η* for *η* detected photons. Because incoherent measurements used for compressive sensing use half the available light on average, the information they convey is contained in their deviation from the mean. As a general rule, a successful reconstruction requires the standard deviation of the measurement vector to exceed the shot noise. Compressive sensing has been shown to work well in the presence of shot noise in comparison to other sensing methodologies; for more information see Refs. [11, 27].

The second noise source is background light and detector dark counts, which add spurious events to *Y*_{Q} and *Y*_{I}. These values can be easily measured by blocking the detector and observing count rates in the absence of signal. They can then be subtracted from future measurement vectors. At our rate of 2 × 10^{6} signal counts per second, these noise sources negligibly affected our results and the amelioration schemes discussed were not needed. They are likely to become important in more real-world applications beyond the laboratory.

## 5. Results

### 5.1. Simple scene

Our first scene was acquired with *n* = 256 × 256 pixels, *m* = 0.2*n* = 13108, and *t*_{p} = 40/1440 seconds for an exposure time of 6.07 minutes. The sparsity promoter was TV. Both the intensity and depth map are accurately recovered.

We also reconstructed the scene from much shorter exposures; results are given in Fig. 3. One exposure used *m* = 0.1*n* = 6554 and *t*_{p} = 5/1440 seconds for a total exposure time of 23 seconds. Figures 3(a) and 3(b) give results for an exposure with *m* = 0.07*n* = 4558 and *t*_{p} = 1/1440 seconds for a total exposure time of 3 seconds. The optional masking process (protocol step 7) was used. While the reconstructions are considerably noisier than the long exposure case, the objects are recognizable and the average depths are accurate.

A raster scan at the minimum dwell time of 1/1440 seconds per pixel and *n* = 256 × 256 pixels would require 46 seconds, so both results in Fig. 3 are recovered faster than it is possible to raster scan. If the same dwell times were used (*t*_{p} = 5/1440 and *t*_{p} = 1/1440 seconds respectively), the raster scan would take about 228 seconds and 46 seconds. Note that many significant pixels in the recovered intensity images in Fig. 3 have less than 1 photon per pixel. Therefore, even the longer raster scans would struggle to produce a good picture because they cannot resolve fewer than 1 photon per pixel. Our protocol is more efficient because each incoherent projection has flux contributed from many pixels at once, measuring half the available light on average.

A basis scan requires *n* measurements, so the minimum possible acquisition time (*t*_{p} = 1/1440 seconds) is the same as for raster scanning, in this case *n* × *t*_{p} = 46 seconds. However, a basis scan using Hadamard patterns benefits from the same increased flux per measurement as the incoherent projections of CS, so SNR is improved over raster scanning. Nevertheless, CS has been shown to outperform basis scan with fewer measurements. Since the two schemes require the same hardware, CS is preferred. For a comparison of CS with other sensing techniques, including raster and basis scan, see Ref. [11].

### 5.2. Natural scene

An *n* = 256 × 256 pixel measurement of a more complicated scene containing real objects is given in Fig. 4. The scene consists of a small cactus, a shoe, and a microscope placed at to-target distances of 465 cm, 540 cm, and 640 cm respectively. A photograph of the scene is given in Fig. 4(a). The scene was acquired with *m* = 0.3*n* and a per-pattern dwell time *t*_{p} = 50/1440 seconds for an exposure time of 11.37 minutes. The sparsity promoter was the *ℓ*_{1} norm in the Haar wavelet representation.

### 5.3. Depth calibration

For depth calibration, we acquired *n* = 32 × 32 pixel depth maps with *m* = 0.1*n* = 102. The per-pattern dwell time was 1/1440 seconds, so each depth map was acquired in 0.07 seconds. The pulse length was 2 ns, or 60 cm.

### 5.4. Novelty filtering

A novelty filter reports only the difference between two instances of a scene. Traditionally, this is done by acquiring a current signal *X*^{(c)} and subtracting from it a previously acquired reference signal *X*^{(r)} to find Δ*X* = *X*^{(c)} − *X*^{(r)}.

With compressive sensing, Δ*X* can be directly reconstructed [28, 29].

Consider measurements of *X*^{(r)} and *X*^{(c)} as in Eq. (1), using the same sensing matrix *A* for both signals. Instead of separately solving Eq. (2) for each signal, first difference their measurement vectors to find Δ*Y* = *Y*^{(c)} − *Y*^{(r)}. Eq. (2) is then solved with Δ*Y* to obtain Δ*X* without ever finding *X*^{(c)} or *X*^{(r)}. Furthermore, the requisite number of measurements *m* depends only on the sparsity of Δ*X*. For small changes in a very cluttered scene, the change can be much sparser than the full scene. It is often possible to recover the novelty with too few measurements to reconstruct the full scene.
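Because the measurements are linear, differencing measurement vectors is equivalent to measuring the difference scene directly. A quick numerical check (the scene contents and sizes below are arbitrary assumptions):

```python
import numpy as np

rng = np.random.default_rng(4)
n, m = 256, 80
A = rng.integers(0, 2, (m, n)).astype(float)   # the same sensing matrix for both scenes

X_r = rng.uniform(0.0, 1.0, n)                  # cluttered reference scene (not sparse)
X_c = X_r.copy()
X_c[10] -= X_r[10]                              # an object leaves one location...
X_c[200] += 0.8                                 # ...and appears at another: sparse change

dY = A @ X_c - A @ X_r                          # difference of measurement vectors
# dY is itself a valid CS measurement of the 2-sparse novelty dX = X_c - X_r,
# so Eq. (2) can be solved for dX directly, with m set by dX's sparsity alone.
dX = X_c - X_r
```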

Our first novelty filtering demonstration used *n* = 256 × 256 pixels, *m* = 0.2*n*, and *t*_{p} = 40/1440 seconds for an exposure time *t* = 6.07 minutes. The sparsity promoter was TV. The difference reconstruction contains only the ‘R’ object that moved. Note that there are two copies of the ‘R’: one is negative, indicating the object’s former position, and one is positive, indicating the object’s new position.

A second demonstration was acquired with *m* = 0.02*n* = 1311 and *t*_{p} = 40/1440 seconds for an exposure time of 37 seconds. Masking was performed. For this acquisition time, the static clutter is very poorly imaged and is difficult to recognize in the full scene. The difference image effectively removes the clutter and gives a recognizable reconstruction of the novelty. Again, this is far faster than raster scanning, which requires at least two 45-second scans for differencing.

_{p}### 5.5. Video and object tracking

For real-time video of a swinging pendulum, each *n* = 32 × 32 pixel frame was acquired with *m* = 0.1*n* = 99 and a per-pattern dwell time *t*_{p} = 1/1440 seconds. Each frame required 0.07 seconds to acquire, for a frame rate slightly exceeding 14 frames per second. The sparsity promoter was TV. Sample frames showing the pendulum moving through a single period are given in Fig. 7, where alternate frames are skipped for presentation. We clearly recover images of the pendulum oscillating in all three dimensions. The full movie can be viewed online (Media 1).

The recovered *x*, *y*, and *z* values for the pendulum location as a function of frame number can be combined to yield the three-dimensional, parametric trajectory given in Fig. 8(d). Sinusoidal fits are in good agreement with the expected values, particularly in the depth dimension.

## 6. Protocol trade-offs and limitations

Our technique reconstructs *X*_{Q} as an image itself. It is also very natural for photon counting, as per-photon TOF is easy to measure and sum. However, this ease-of-use does come with some potential trade-offs.

_{Q}13. A. Kirmani, A. Colaço, F. N. C. Wong, and V. K. Goyal, “Exploiting sparsity in time-of-flight range acquisition using a single time-resolved sensor,” Opt. Express **19**, 21485–21507 (2011). [CrossRef] [PubMed]

13. A. Kirmani, A. Colaço, F. N. C. Wong, and V. K. Goyal, “Exploiting sparsity in time-of-flight range acquisition using a single time-resolved sensor,” Opt. Express **19**, 21485–21507 (2011). [CrossRef] [PubMed]

One trade-off is that *X*_{Q} is treated like an image. This is reasonable because it is simply a scaling of the intensity map by the depth map; it maintains much of the structure of an image and still qualitatively appears like an image to a viewer. Because *X*_{Q} is treated like an image, we use sparsity maximizers common for images, such as wavelets and total variation. In these representations, *X*_{Q} is likely to be more complex (less sparse) than *X*_{I}, so the complexity of *X*_{Q} limits how far the number of measurements can be reduced. It is possible that other representations may be more appropriate for *X*_{Q}. A full analysis of the complexity of *X*_{Q} and its absolute best representation remains an open problem.

## 7. Conclusion


## Acknowledgments


**OCIS Codes**

(110.6880) Imaging systems : Three-dimensional image acquisition

(280.3640) Remote sensing and sensors : Lidar

(110.1758) Imaging systems : Computational imaging

**ToC Category:**

Imaging Systems

**History**

Original Manuscript: July 15, 2013

Revised Manuscript: September 12, 2013

Manuscript Accepted: September 14, 2013

Published: September 30, 2013

**Citation**

Gregory A. Howland, Daniel J. Lum, Matthew R. Ware, and John C. Howell, "Photon counting compressive depth mapping," Opt. Express **21**, 23822-23837 (2013)

http://www.opticsinfobase.org/oe/abstract.cfm?URI=oe-21-20-23822


### References

- M. C. Amann, T. Bosch, M. Lescure, R. Myllyla, and M. Rioux, “Laser ranging: a critical review of usual techniques for distance measurement,” Opt. Eng. **40**, 10–19 (2001). [CrossRef]
- C. Mallet and F. Bretar, “Full-waveform topographic lidar: State-of-the-art,” ISPRS Journal of Photogrammetry and Remote Sensing **64**, 1–16 (2009). [CrossRef]
- S. Hussmann and T. Liepert, “Three-dimensional TOF robot vision system,” IEEE Trans. Instrum. Meas. **58**, 141–146 (2009). [CrossRef]
- S. Foix, G. Alenya, and C. Torras, “Lock-in time-of-flight (TOF) cameras: A survey,” IEEE Sensors J. **11**, 1917–1926 (2011). [CrossRef]
- B. Schwarz, “Mapping the world in 3D,” Nat. Photonics **4**, 429–430 (2010). [CrossRef]
- A. McCarthy, N. J. Krichel, N. R. Gemmell, X. Ren, M. G. Tanner, S. N. Dorenbos, V. Zwiller, R. H. Hadfield, and G. S. Buller, “Kilometer-range, high resolution depth imaging via 1560 nm wavelength single-photon detection,” Opt. Express **21**, 8904–8915 (2013). [CrossRef] [PubMed]
- M. Richard and W. Davis, “Jigsaw: A foliage-penetrating 3D imaging laser radar system,” Lincoln Laboratory Journal **15**, 1 (2005).
- M. Vaidyanathan, S. Blask, T. Higgins, W. Clifton, D. Davidsohn, R. Carson, V. Reynolds, J. Pfannenstiel, R. Cannata, R. Marino, J. Drover, R. Hatch, D. Schue, R. Freehart, G. Rowe, J. Mooney, C. Hart, B. Stanley, J. McLaughlin, E. I. Lee, J. Berenholtz, B. Aull, J. Zayhowski, A. Vasile, P. Ramaswami, K. Ingersoll, T. Amoruso, I. Khan, W. Davis, and R. Heinrichs, “Jigsaw phase III: a miniaturized airborne 3-D imaging laser radar with photon-counting sensitivity for foliage penetration,” Proc. SPIE **6550**, 65500N (2007). [CrossRef]
- M. Entwistle, M. A. Itzler, J. Chen, M. Owens, K. Patel, X. Jiang, K. Slomkowski, and S. Rangwala, “Geiger-mode APD camera system for single-photon 3D ladar imaging,” Proc. SPIE **8375**, 83750D (2012).
- M. A. Itzler, M. Entwistle, M. Owens, K. Patel, X. Jiang, K. Slomkowski, S. Rangwala, P. F. Zalud, T. Senko, J. Tower, and J. Ferraro, “Comparison of 32 × 128 and 32 × 32 geiger-mode APD FPAS for single photon 3D ladar imaging,” Proc. SPIE **8033**, 80330G (2011). [CrossRef]
- M. Duarte, M. Davenport, D. Takhar, J. Laska, T. Sun, K. Kelly, and R. Baraniuk, “Single-pixel imaging via compressive sampling,” IEEE Signal Process. Mag. **25**, 83–91 (2008). [CrossRef]
- M. Sarkis and K. Diepold, “Depth map compression via compressed sensing,” in Proceedings of 16th IEEE International Conference on Image Processing (IEEE, 2009), pp. 737–740.
- A. Kirmani, A. Colaço, F. N. C. Wong, and V. K. Goyal, “Exploiting sparsity in time-of-flight range acquisition using a single time-resolved sensor,” Opt. Express **19**, 21485–21507 (2011). [CrossRef] [PubMed]
- G. A. Howland, P. B. Dixon, and J. C. Howell, “Photon-counting compressive sensing laser radar for 3D imaging,” Appl. Opt. **50**, 5917–5920 (2011). [CrossRef] [PubMed]
- L. Li, L. Wu, X. Wang, and E. Dang, “Gated viewing laser imaging with compressive sensing,” Appl. Opt. **51**, 2706–2712 (2012). [CrossRef] [PubMed]
- W. R. Babbitt, Z. W. Barber, and C. Renner, “Compressive laser ranging,” Opt. Lett. **36**, 4794–4796 (2011). [CrossRef] [PubMed]
- D. L. Donoho, “Compressed sensing,” IEEE Trans. Inf. Theory **52**, 1289–1306 (2006). [CrossRef]
- E. Candès and J. Romberg, “Sparsity and incoherence in compressive sampling,” Inverse Probl. **23**, 969 (2007). [CrossRef]
- M. Lustig, D. Donoho, and J. M. Pauly, “Sparse MRI: The application of compressed sensing for rapid MR imaging,” Magn. Reson. Med. **58**, 1182–1195 (2007). [CrossRef] [PubMed]
- J. Bobin, J.-L. Starck, and R. Ottensamer, “Compressed sensing in astronomy,” IEEE J. Sel. Topics Signal Process. **2**, 718–726 (2008). [CrossRef]
- S. T. Flammia, D. Gross, Y.-K. Liu, and J. Eisert, “Quantum tomography via compressed sensing: error bounds, sample complexity and efficient estimators,” New J. Phys. **14**, 095022 (2012). [CrossRef]
- G. A. Howland and J. C. Howell, “Efficient high-dimensional entanglement imaging with a compressive-sensing double-pixel camera,” Phys. Rev. X **3**, 011013 (2013). [CrossRef]
- M. A. T. Figueiredo, R. D. Nowak, and S. J. Wright, “Gradient projection for sparse reconstruction: Application to compressed sensing and other inverse problems,” IEEE J. Sel. Topics Signal Process. **1**, 586–597 (2007). [CrossRef]
- D. Donoho and I. Johnstone, “Threshold selection for wavelet shrinkage of noisy data,” in Proceedings of the 16th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (IEEE, 1994), pp. A24–A25.
- C. Li, W. Yin, and Y. Zhang, “TVAL3: TV minimization by augmented Lagrangian and alternating direction algorithms,” http://www.caam.rice.edu/~optimization/L1/TVAL3/.
- Z. T. Harmany, R. F. Marcia, and R. M. Willett, “Sparse Poisson intensity reconstruction algorithms,” in Proceedings of the IEEE/SP 15th Workshop on Statistical Signal Processing (IEEE, 2009), pp. 634–637.
- D. L. Donoho, A. Maleki, and A. Montanari, “The noise-sensitivity phase transition in compressed sensing,” IEEE Trans. Inf. Theory **57**, 6920 (2011). [CrossRef]
- V. Cevher, A. Sankaranarayanan, M. F. Duarte, D. Reddy, R. G. Baraniuk, and R. Chellappa, “Compressive sensing for background subtraction,” in *Computer Vision – ECCV 2008, Lecture Notes in Computer Science* (Springer, 2008), pp. 155–168. [CrossRef]
- O. S. Magaña-Loaiza, G. A. Howland, M. Malik, J. C. Howell, and R. W. Boyd, “Compressive object tracking using entangled photons,” Appl. Phys. Lett. **102**, 231104 (2013). [CrossRef]
