## Reconstruction of hidden 3D shapes using diffuse reflections

Optics Express, Vol. 20, Issue 17, pp. 19096-19108 (2012)

http://dx.doi.org/10.1364/OE.20.019096


### Abstract

We analyze multi-bounce propagation of light in an unknown hidden volume and demonstrate that the reflected light contains sufficient information to recover the 3D structure of the hidden scene. We formulate the forward and inverse theory of secondary scattering using ideas from energy-front propagation and tomography. We show that the Fresnel approximation greatly simplifies this problem, and that the inversion can be achieved via a backpropagation process. We study the invertibility, uniqueness, and choices of space-time-angle dimensions using synthetic examples. We show that a 2D streak camera can be used to discover and reconstruct hidden geometry. Using a 1D high-speed time-of-flight camera, we show that our method can be used to recover 3D shapes of objects “around the corner”.

© 2012 OSA

## 1. Introduction

### Contributions

In comparison to our previous work [8], in this paper we:

- apply SPGL1 compressive reconstruction to new experimental data;
- apply a linear inversion algorithm to real data and show that linear inversion is possible if the calibration bias of the streak camera is taken into account;
- provide results of a superior reconstruction with CoSaMP from simulated data;
- provide a mathematical model that describes our reconstruction problem as a computed tomography problem;
- provide additional data and results for the filtered backprojection algorithm.

### Limitations and scope

## 2. Modeling propagation of a light pulse for multiple bounces

### 2.2. Scattering of a pulse

#### 2.2.1. Generating streak photos

#### 2.2.2. Hyperbolic contribution

Here *u*, *v* are the two coordinates of the point *w* in the receiver plane. For a fixed laser and receiver location, this equation describes an ellipsoid in the sender location. The laser spot and the receiver (both on the wall) constitute the two foci of this ellipsoid, and its eccentricity depends on the time at which the receiver receives the impulse.
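This constant-travel-time locus is easy to verify numerically. The sketch below (our illustration, not the paper's code; coordinates and tolerance are hypothetical) checks whether a candidate sender point lies on the ellipsoid whose foci are the laser spot and the receiver:

```python
import numpy as np

C_LIGHT = 3e8  # speed of light in m/s

def on_ellipsoid(s, laser, receiver, t, rel_tol=1e-4):
    """True if sender point s lies on the constant-travel-time ellipsoid
    whose foci are the laser spot and the receiver on the wall."""
    path = np.linalg.norm(s - laser) + np.linalg.norm(s - receiver)
    return abs(path - C_LIGHT * t) < rel_tol * C_LIGHT * t

laser = np.array([0.0, 0.0, 0.0])   # laser spot on the wall (one focus)
recv = np.array([0.4, 0.0, 0.0])    # receiver point on the wall (other focus)
s = np.array([0.2, 0.0, 0.5])       # hypothetical hidden sender point
t = (np.linalg.norm(s - laser) + np.linalg.norm(s - recv)) / C_LIGHT
print(on_ellipsoid(s, laser, recv, t))  # → True
```

As time increases, the sum of the focal distances grows, so later arrivals correspond to larger, less eccentric ellipsoids.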

## 3. Forward model: elliptical tomographic projection

### 3.1. Elliptical tomography problem description

Here *W*(*x*) = *I* *δ_S*(*x*) is a delta function with support on the surface *S* to be reconstructed. Apart from the 1/*r*² factors, which typically vary slowly over the region of interest, we hence see that individual streak image pixels measure elliptical projections of the world volume *W*(*x*). Due to its similarity with traditional tomography, we call this problem *elliptical tomography*. Note however that there are also key differences to traditional tomography: (i) the recorded projections live in a higher dimensional (5D) space than the world (3D); (ii) the projections are along (2D) ellipsoids instead of (1D) lines. This makes the analysis more complicated.
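As a rough illustration of this forward model, the sketch below (our construction under simplifying assumptions, not the paper's code) scatters a cloud of hidden surface points into a simulated streak image: each point deposits intensity into the pixel (*w*, *t*) whose total path length laser spot → point → wall point equals *ct*, attenuated by 1/*r*² per leg. Units, geometry, and sampling are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
c, dt, n_t = 1.0, 0.02, 128          # units with c = 1; 128 time bins
laser = np.array([0.0, 0.0, 0.0])    # laser spot on the wall
wall = np.stack([np.linspace(-0.5, 0.5, 64),
                 np.zeros(64), np.zeros(64)], axis=1)  # 1D scanline on wall
# hypothetical hidden surface patch, ~0.5 units away from the wall plane
surface = rng.random((200, 3)) * np.array([0.4, 0.1, 0.2]) + np.array([0.0, 0.0, 0.5])

streak = np.zeros((len(wall), n_t))  # rows: wall points, columns: time bins
for s in surface:
    r1 = np.linalg.norm(s - laser)               # laser spot -> surface point
    r2 = np.linalg.norm(wall - s, axis=1)        # surface point -> wall points
    t_bin = ((r1 + r2) / (c * dt)).astype(int)   # elliptical time of arrival
    ok = t_bin < n_t
    streak[np.arange(len(wall))[ok], t_bin[ok]] += 1.0 / (r1**2 * r2[ok]**2)
```

Each hidden point traces a curved locus through the (wall position, time) image; summing the loci of all surface points produces the streak image that the inverse algorithm must untangle.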

### 3.2. Challenges and missing cones

The attainable projection directions cover *only* a limited region of the unit sphere. Hence, without additional priors, it is not possible to reconstruct the Fourier transform of the target object in the missing directions. This is the missing cone problem well known from traditional tomography. Experimentally, we obtain very good resolution in the depth direction (orthogonal to the wall), while high-frequency spatial features in the transverse directions (parallel to the wall) tend to get lost.

## 4. Inverse algorithm: filtered back projection

### 4.1. Overview of the algorithm

**Phase 1: data acquisition.** We direct the laser to 60 different positions on the diffuser wall and capture the corresponding streak images. For each of the 60 positions, 100 streak images are taken and overlaid to reduce noise.

**Phase 2: data preprocessing.** The streak images are loaded, intensity corrected, and shifted to adjust for spatiotemporal jitter.

**Phase 3: 3D reconstruction.** The clean streak images are used to reconstruct the unknown shape using our backprojection-type algorithm.

#### 4.1.1. Phase 2: data preprocessing

**Timing correction.** To correct for drift in camera timing synchronization (jitter), both in space and time, we direct part of the laser beam directly onto the diffuser wall. This produces a sharp “calibration spot” in each streak image. The calibration spot is detected in each image, and the image is subsequently shifted to align the reference spot at the same pixel location in all streak images. The severity of the jitter is monitored in order to detect outliers or broken datasets.

**Intensity correction.** To remove a common bias in the streak images, we subtract a reference (background) image taken without the target object present in the setup.

**Gain correction.** We correct for the non-uniform gain of the streak camera’s CCD sensor by dividing by a white-light image taken beforehand.
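A minimal sketch of these three corrections, assuming per-dataset background and white-light frames and a simple argmax spot detector; all helper names are hypothetical and not from the paper's code:

```python
import numpy as np

def preprocess(streak, background, white, spot_col_ref, find_spot):
    """Sketch of Phase 2. `background` is a frame taken without the target,
    `white` a white-light frame for CCD gain, `find_spot` locates the time
    (column) coordinate of the calibration spot in an image."""
    img = streak - background                 # intensity (bias) correction
    img = img / np.clip(white, 1e-6, None)    # gain correction
    shift = spot_col_ref - find_spot(img)     # timing (jitter) correction
    return np.roll(img, shift, axis=1)

# Illustrative usage: the calibration spot is the brightest column.
find_spot = lambda img: int(np.argmax(img.sum(axis=0)))
streak = np.zeros((4, 16)); streak[:, 9] = 5.0   # spot drifted to column 9
clean = preprocess(streak, np.zeros_like(streak),
                   np.ones_like(streak), spot_col_ref=7, find_spot=find_spot)
```

After the roll, the calibration spot sits at the same reference column in every image, so pixel time coordinates are comparable across the whole dataset.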

#### 4.1.2. Phase 3: 3D reconstruction

**Voxel grid setup.** We estimate an oriented bounding box for the working volume to set up a voxel grid (see below).

**Downsampling (optional).** To improve speed, the data may be downsampled by discarding a fraction of the camera’s pixels in each streak image and/or entire streak images. Experiments showed that every second camera pixel may be discarded without losing much reconstruction accuracy. When discarding entire streak images, it is important that the laser positions on the diffuser wall corresponding to the remaining images still cover a large area.

**Backprojection.** For each voxel in the working volume and for each streak image, we compute the streak image pixels to which the voxel under consideration might have contributed. Concretely, the voxel at location *v* can have contributed to a pixel corresponding to a point *w* on the wall at time *t* if |*L* − *v*| + |*v* − *w*| + |*w* − *C*| = *ct*. Here *C* is the camera’s center of projection and *L* is the laser position as above. Let us call the streak image pixels satisfying this condition the *contributing pixels*. We compute a function on voxel space, the *heatmap H*. To the voxel *v* under consideration we assign the value

*H*(*v*) = Σ_p (*r*₁ *r*₂)^{2*α*} *I_p*,

where the sum is over all contributing pixels *p*, *I_p* is the intensity measured at pixel *p*, and *r*₁ = |*L* − *v*|, *r*₂ = |*v* − *w_p*|. The prefactor corrects for the distance attenuation, with *α* being some constant. We use *α* = 1.

**Filtering.** The heatmap *H* is now a function on our 3-dimensional voxel grid. We assume that the axes of the grid are ordered such that the third axis faces away from the diffuser wall. We compute the *filtered heatmap H_f* as the second derivative of the heatmap along that third axis of the voxel grid. This can be done by shifting and subtracting the world volume along the correct axes. The filtered heatmap measures the confidence we have that at a specific voxel location there is a surface patch of the hidden object.

**Thresholding.** We compute a sliding-window maximum *M_loc* of the filtered heatmap *H_f*. Typically we use a 20×20×20 voxel window at a resolution of around 2 mm. Our estimate of the 3D shape to be reconstructed consists of those voxels *v* that satisfy *H_f*(*v*) > *λ_loc* *M_loc*(*v*) and *H_f*(*v*) > *λ_glob* *M_glob*, where *M_glob* = max(*H_f*) is the global maximum of the filtered heatmap and *λ_loc*, *λ_glob* are constants. Typically we use *λ_loc* = 0.45 and *λ_glob* = 0.15.

**Compressive reconstruction.** We use techniques such as SPGL1 and CoSaMP [9] as an alternative to backprojection and filtering. We rely on the fact that the 3D voxel grid is sparsely filled, containing surfaces that occupy only one voxel in depth. Since SPGL1 uses only matrix–vector multiplications, it does not require the projection matrix to be stored explicitly.

**Rendering.** The result of the thresholding step is a 3D point cloud. We use the Chimera rendering software [12] to visualize this point cloud.
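The backprojection, filtering, and thresholding steps can be sketched as below. This is our reading of the text, not the paper's code: we assume a travel-time condition |*L* − *v*| + |*v* − *w*| = *ct* with the camera leg folded into the time calibration, a distance prefactor (*r*₁*r*₂)^{2α}, and a sign convention for the depth second derivative that makes surface voxels score positively.

```python
import numpy as np
from scipy.ndimage import maximum_filter

def backproject(voxels, laser, wall, streak, dt, c=1.0, alpha=1.0):
    """Heatmap H: for each voxel, sum the distance-corrected intensities of
    the contributing pixels, i.e. those with |L-v| + |v-w| = c*t."""
    heat = np.zeros(len(voxels))
    idx = np.arange(len(wall))
    for i, v in enumerate(voxels):
        r1 = np.linalg.norm(v - laser)
        r2 = np.linalg.norm(wall - v, axis=1)
        t_bin = ((r1 + r2) / (c * dt)).astype(int)
        ok = t_bin < streak.shape[1]
        heat[i] = np.sum((r1 * r2[ok]) ** (2 * alpha) * streak[idx[ok], t_bin[ok]])
    return heat

def filter_and_threshold(heat3d, lam_loc=0.45, lam_glob=0.15, win=20):
    """Second derivative along the depth (third) axis, then the local/global
    threshold test; the sign flip is our convention so that ridges in the
    heatmap (candidate surface voxels) give positive responses."""
    hf = -np.diff(heat3d, n=2, axis=2)        # discrete second derivative
    m_loc = maximum_filter(hf, size=win)      # sliding-window maximum M_loc
    m_glob = hf.max()                         # global maximum M_glob
    return (hf > lam_loc * m_loc) & (hf > lam_glob * m_glob)
```

In this form each voxel is processed independently, so the backprojection loop parallelizes trivially over the voxel grid.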

### 4.2. A remark about the filtering step

As in traditional tomography, the raw backprojection is blurred; the appropriate correction corresponds to a |*k*|² filter in Fourier space. In image space, this amounts to taking a second derivative along each scanline. This motivates our choice of filter above. We tested filtering by the second derivative in both image and world space and found that taking the derivative approximately along the target surface’s normal yielded the best results.
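The equivalence between the |*k*|² Fourier filter and a second derivative can be checked numerically on a band-limited signal; multiplication by −|*k*|² in Fourier space gives the second derivative exactly (the sign is absorbed into the filter convention). A small self-contained check:

```python
import numpy as np

n = 256
x = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
f = np.sin(3.0 * x)                    # band-limited test signal
k = 2.0 * np.pi * np.fft.fftfreq(n, d=x[1] - x[0])   # angular frequencies
# apply the -|k|^2 filter in Fourier space
d2f = np.real(np.fft.ifft(-(k ** 2) * np.fft.fft(f)))
# exact second derivative of sin(3x) is -9 sin(3x)
err = np.max(np.abs(d2f - (-9.0 * np.sin(3.0 * x))))
```

For a signal sampled over an exact period, the spectral derivative matches the analytic one to machine precision.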

### 4.3. Application of CoSaMP
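A minimal sketch of the CoSaMP iteration of Needell and Tropp [9], applied here to a generic sparse recovery problem rather than the paper's elliptical projection operator (for the tomography problem, *A* would be the projection matrix and *x* the vectorized, sparsely occupied voxel grid):

```python
import numpy as np

def cosamp(A, y, s, iters=50, tol=1e-10):
    """Recover an s-sparse x from y = A x (noiseless sketch)."""
    m, n = A.shape
    x = np.zeros(n)
    r = y.copy()
    for _ in range(iters):
        proxy = A.T @ r                                 # signal proxy
        omega = np.argsort(np.abs(proxy))[-2 * s:]      # 2s largest correlations
        support = np.union1d(omega, np.flatnonzero(x))  # merge supports
        b = np.zeros(n)
        b[support] = np.linalg.lstsq(A[:, support], y, rcond=None)[0]
        keep = np.argsort(np.abs(b))[-s:]               # prune to s largest
        x = np.zeros(n)
        x[keep] = b[keep]
        r = y - A @ x                                   # update residual
        if np.linalg.norm(r) <= tol * np.linalg.norm(y):
            break
    return x
```

Each iteration needs only products with *A* and its transpose plus a small least-squares solve on the merged support, which is what makes greedy methods of this kind attractive for large, implicitly defined projection operators.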

## 5. Experiments


### 5.1. Results


### 5.2. Performance evaluation

## 6. Future directions

Linear inversion requires an accurate forward model, i.e., the matrix *A* above. One can obtain results for very good datasets after changing *A* slightly to account for intricacies of our imaging system such as vignetting and gain correction. On the other hand, the backprojection algorithm turned out to be quite robust to calibration errors, though they can deteriorate the obtained resolution. Since we typically have to deal with some calibration error (see Section 5), we used the backprojection algorithm to obtain the real-data reconstruction results of this paper.

## 7. Conclusion

## Acknowledgments

## References and links

1. P. Sen, B. Chen, G. Garg, S. R. Marschner, M. Horowitz, M. Levoy, and H. P. A. Lensch, “Dual photography,” ACM Trans. Graphics **24**(2), 745–755 (2005). [CrossRef]
2. S. M. Seitz, Y. Matsushita, and K. N. Kutulakos, “A theory of inverse light transport,” in Proc. of IEEE ICCV (2005), Vol. 2, pp. 1440–1447.
3. S. K. Nayar, G. Krishnan, M. D. Grossberg, and R. Raskar, “Fast separation of direct and global components of a scene using high frequency illumination,” ACM Trans. Graphics **25**, 935–944 (2006). [CrossRef]
4. A. Kirmani, T. Hutchison, J. Davis, and R. Raskar, “Looking around the corner using transient imaging,” in Proc. of IEEE ICCV (2009), pp. 159–166.
5. R. Pandharkar, A. Velten, A. Bardagjy, M. G. Bawendi, and R. Raskar, “Estimating motion and size of moving non-line-of-sight objects in cluttered environments,” in Proc. of CVPR (2011), pp. 265–272. [CrossRef]
6. S. Liu, T. Ng, and Y. Matsushita, “Shape from second-bounce of light transport,” in Proc. of ECCV (2010), pp. 280–293.
7. R. Raskar and J. Davis, “5D time-light transport matrix: What can we reason about scene properties,” MIT Technical Report (2008), http://hdl.handle.net/1721.1/67888.
8. A. Velten, T. Willwacher, O. Gupta, A. Veeraraghavan, M. G. Bawendi, and R. Raskar, “Recovering three-dimensional shape around a corner using ultra-fast time-of-flight imaging,” Nat. Commun. **3**, 745 (2011). [CrossRef]
9. D. Needell and J. Tropp, “CoSaMP: Iterative signal recovery from incomplete and inaccurate samples,” Appl. Comput. Harmon. Anal. **26**, 301–321 (2009). [CrossRef]
10. Hamamatsu, “Hamamatsu streak camera tutorial,” http://learn.hamamatsu.com/tutorials/java/streakcamera/.
11. D. Forsyth and J. Ponce, *Computer Vision, a Modern Approach* (Prentice Hall, 2002).
12. E. Pettersen, T. Goddard, C. Huang, G. Couch, D. Greenblatt, E. Meng, and T. Ferrin, “UCSF Chimera–a visualization system for exploratory research and analysis,” J. Comput. Chem. **25**, 1605–1612 (2004). [CrossRef] [PubMed]
13. B. Atcheson, I. Ihrke, W. Heidrich, A. Tevs, D. Bradley, M. Magnor, and H. Seidel, “Time-resolved 3d capture of non-stationary gas flows,” ACM Trans. Graphics **27**, 1–9 (2008). [CrossRef]

**OCIS Codes**

(110.0110) Imaging systems : Imaging systems

(110.1758) Imaging systems : Computational imaging

**ToC Category:**

Imaging Systems

**History**

Original Manuscript: March 30, 2012

Revised Manuscript: July 5, 2012

Manuscript Accepted: July 11, 2012

Published: August 6, 2012

**Virtual Issues**

Vol. 7, Iss. 10 *Virtual Journal for Biomedical Optics*

**Citation**

Otkrist Gupta, Thomas Willwacher, Andreas Velten, Ashok Veeraraghavan, and Ramesh Raskar, "Reconstruction of hidden 3D shapes using diffuse reflections," Opt. Express **20**, 19096-19108 (2012)

http://www.opticsinfobase.org/vjbo/abstract.cfm?URI=oe-20-17-19096



