
Single-photon pulsed-light indirect time-of-flight 3D ranging

S. Bellisai, D. Bronzi, F. A. Villa, S. Tisa, A. Tosi, and F. Zappa


Optics Express, Vol. 21, Issue 4, pp. 5086-5098 (2013)
http://dx.doi.org/10.1364/OE.21.005086



Abstract

“Indirect” time-of-flight is a technique for obtaining depth-resolved images through active illumination that has become increasingly popular in recent years. Several methods and light-timing patterns are in use today, aimed at improving measurement precision with smarter algorithms while using ever less light power. The purpose of this work is to present an indirect time-of-flight imaging camera, based on pulsed-light active illumination and a 32 × 32 single-photon avalanche diode array, with an improved illumination timing pattern able to increase depth resolution and reach single-photon sensitivity.

© 2013 OSA

1. Introduction

Imaging acquisition systems for 3D scene reconstruction are nowadays required in an increasing number of applications, for example indoor and outdoor safety and security monitoring, long-distance LIDAR ranging, automotive safety, robotics, and architecture [1–7]. Several methods can be used to reconstruct the distances of the objects in a scene, for example stereography [8], structured light [9], or time-of-flight measurements. Systems based on active illumination of the scene usually employ a light source, which emits photons towards the scene, and a single solid-state imager, which detects the back-reflected light. By measuring the time-of-flight (TOF) between light emission and detection of the reflected signal, the distance between an object and the camera can be computed through the speed of light. A 3D depth-resolved image of the scene can then be acquired by measuring the TOF pixel-by-pixel with an array of smart pixels.
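For reference, a round-trip delay ΔtTOF maps to the object distance as

$$d = \frac{c \cdot \Delta t_{TOF}}{2}, \qquad \text{e.g.}\ \Delta t_{TOF} = 66.7\ \text{ns} \;\Rightarrow\; d \approx 10\ \text{m}.$$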

The various techniques currently used to measure the TOF [10] can be grouped into direct (dTOF) and indirect (iTOF) ones. The former directly measure the time delay by means of a very accurate timer, i.e. a Time-to-Digital Converter (TDC) [11] or a Time-to-Amplitude Converter (TAC) [12]. The latter instead reconstruct the time delay (hence the distance) from the measurement of the phase delay of the reflected signal with respect to the periodic emitted light excitation. The dTOF approach requires specific smart pixels and array sensors able to measure the time elapsed from the emission of a very sharp (picosecond-range) optical pulse to its detection. That method is usually employed for long (kilometer) distances and for very high precision (millimeter or even shorter) distance measurements [2,13], not to mention LIDAR applications [3]. iTOF ranging, instead, can be implemented by modifying standard 2D imaging sensors with the addition of a demodulation stage, and is mainly aimed at short-to-medium distances (tens of meters) with depth resolutions of some centimeters [14].

In a first iTOF technique, known as continuous-wave iTOF (cw-iTOF), a sinusoidally modulated light shines on the scene and the return signal is sampled a few times per modulation period, in order to compute the phase delay and hence the camera-object distance [14]. In a second iTOF technique, based on square pulses of light and called pulsed-light iTOF (p-iTOF), the return signal is integrated within a well-defined time slot inside the period of the signal. This latter technique is also known as Multiple Double Short-time Integration (MDSI) [15].

Two detection approaches can be exploited: either “analog” detection of the photocurrent or “digital” counting of individual photons. In principle the analog approach could be single-shot, but it requires very high peak-power laser excitation and significant effort to reduce electronic (Johnson and 1/f) noise contributions. The digital approach, instead, requires much lower peak-power excitation and is inherently insensitive to electronic noise of any kind, since it provides a pulse every time a single photon is detected; however, it requires the integration of many excitation pulses in order to ascertain that a detected photon comes from the excitation and not from the background, thus improving precision.

2. Pulsed-light indirect time-of-flight

In the p-iTOF approach, two techniques can be employed: the long-shutter (LST) one or the double-sampling (DST) one [15]. The first approach is depicted in Fig. 1.

Fig. 1 Long-shutter p-iTOF measurement principle, with the light excitation pulses, the integration time windows (synchronized to the excitation pulse and with durations twice the maximum round trip) and the reflected light (delayed reflected pulsed signal plus flat background).

The pulsed-light duration Δtmax is set equal to the maximum expected round-trip time, i.e. the time taken by a photon to reach the farthest object in the scene, at distance D, and be reflected back to the sensor. A first integration window W0 is synchronously enabled, with a duration of 2·Δtmax. While the light pulse travels towards the scene and back to the detector, the detector collects the signal Q0, given by the background light (during the initial interval ΔtTOF shown in Fig. 1) and then also by the reflected light signal.

In order to assess the distance of the object, a second integration time-window W1 (Fig. 1, center) is enabled in advance by Δtmax with respect to the light pulse. In this way the corresponding sample Q1 (Fig. 1, bottom) provides “useful” information about the object distance, since it contains just a portion of the reflected signal (and not the whole of it). Since the amount of back-reflected light depends not only on the distance of the object in the scene but also on its reflectivity, it is mandatory to normalize the signal Q1 over the signal Q0. However, during W0 and W1 the sensor also acquires a background signal from the scene (e.g. ambient light, stray light, and also detector dark counts). Therefore a third integration window Wb is required to accumulate just the background intensity (Qb), with no light signal therein. Such a quantity must be subtracted from both Q0 and Q1 before the normalization, in order to compute the distance as
$$d = D\left[\frac{(Q_0 - Q_b) - (Q_1 - Q_b)}{Q_0 - Q_b}\right] = D\,\frac{Q_0 - Q_1}{Q_0 - Q_b}$$
(1)
where D is the maximum distance under investigation, given by D = c·Δtmax/2, i.e. proportional to the speed of light, c, and the pulse duration Δtmax.
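As a minimal illustration of Eq. (1), the following Python sketch (hypothetical helper name, not from the paper) computes the LST distance estimate from the three accumulated photon counts:

```python
def lst_distance(q0: float, q1: float, qb: float, d_max: float) -> float:
    """Estimate the object distance via Eq. (1): d = D * (Q0 - Q1) / (Q0 - Qb).

    q0: counts in window W0 (full reflected pulse + background)
    q1: counts in window W1 (partial reflected pulse + background)
    qb: counts in window Wb (background only)
    d_max: maximum depth range D, in meters
    """
    if q0 <= qb:
        raise ValueError("no net signal: Q0 must exceed the background Qb")
    return d_max * (q0 - q1) / (q0 - qb)

# Example with photon budgets similar to those measured in Section 4
# (2,600 signal and 80 background photons per frame), for an object
# halfway down the range, so that half the pulse falls inside W1:
print(lst_distance(q0=2680.0, q1=1380.0, qb=80.0, d_max=22.5))  # -> 11.25
```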

If the detection were analog, the quantities Q0, Q1 and Qb could be charge, current or voltage levels; hence one single excitation pulse could be enough to compute the distance, provided that electronic noise (and background fluctuations) were negligible. Instead, because of the “digital” nature of photon-counting sensors, only one photon per integration window can be counted. Therefore, in order to accumulate enough photons and improve the statistics, the excitation pulses and the integration windows must be repeated for a sufficient number of cycles. An example of the repetitive excitation and acquisition is shown in Fig. 2, where photons are accumulated in subsequent time windows W0, W1 and Wb, time-multiplexed over three consecutive frames: in a first phase only W0 windows are enabled, then only W1 windows, and finally only Wb windows. Figure 3 shows a different timing of the windows W0, W1 and Wb, which are cyclically multiplexed within every frame.

Fig. 2 Long-shutter iTOF measurement scheme, where the photon signal is counted in three different time slots, Q0, Q1 and Qb. Note that the single-photon detector can signal only the first photon detected during the corresponding integration window.

Fig. 3 Long-shutter (LST) iTOF measurement scheme, as in Fig. 2, but with the integration windows enabled cyclically, i.e. W0, then W1, then Wb, and then again W0, and so on.
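To make the one-photon-per-window limitation concrete, here is a small sketch (an illustration of the counting statistics, not the authors' firmware) that accumulates gated counts over many excitation cycles; each window yields at most one detection per cycle, a Bernoulli trial whose probability saturates as the flux grows:

```python
import math
import random

def accumulate_counts(mean_photons_per_window: float, n_cycles: int) -> int:
    """Accumulate gated counts over n_cycles excitation cycles.

    The pixel registers at most one photon per integration window: with
    Poisson-distributed arrivals of mean mu, the per-cycle detection
    probability is 1 - exp(-mu), so the accumulated count grows linearly
    with the flux only as long as mu << 1.
    """
    p_detect = 1.0 - math.exp(-mean_photons_per_window)
    return sum(1 for _ in range(n_cycles) if random.random() < p_detect)

# Example: mu = 0.05 photons per window over 100,000 cycles gives
# ~4,880 counts, close to the linear expectation of 5,000.
print(accumulate_counts(0.05, 100_000))
```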

The computed distance d is sensitive to the statistical fluctuations of the variables Q0, Q1 and Qb. Therefore the variance of the distance measurement d(Q0, Q1, Qb) due to the photon statistics, i.e. the uncertainty in the depth assessment, is given by

$$\left(\frac{\sigma_d}{\bar{d}}\right)^2 = \left(\frac{\sigma_{Q_0 - Q_1}}{\bar{Q}_0 - \bar{Q}_1}\right)^2 + \left(\frac{\sigma_{Q_0 - Q_b}}{\bar{Q}_0 - \bar{Q}_b}\right)^2 - \frac{2\,\sigma_{Q_0}^2}{(\bar{Q}_0 - \bar{Q}_1)(\bar{Q}_0 - \bar{Q}_b)}$$
(2)

By assuming Poisson statistics for the photons, hence $\sigma_{Q_b}^2 = \bar{Q}_b$, $\sigma_{Q_0}^2 = \bar{Q}_0$ and $\sigma_{Q_1}^2 = \bar{Q}_1$, the variance of the distance computation becomes

$$\sigma_d^2 = \frac{D^2}{(\bar{Q}_0 - \bar{Q}_b)^2}\left[(\bar{Q}_0 - \bar{Q}_b)\left(2 - 3\frac{\bar{d}}{D} + \left(\frac{\bar{d}}{D}\right)^2\right) + \bar{Q}_b\left(2 - 2\frac{\bar{d}}{D} + 2\left(\frac{\bar{d}}{D}\right)^2\right)\right]$$
(3)

as a function of the maximum depth range D and the estimated average distance $\bar{d}$.

The corresponding variance for the DST method, whose samples Q2 and Q3 play the roles discussed below, is

$$\sigma_d^2 = \frac{D^2}{(\bar{Q}_2 + \bar{Q}_3 - 2\bar{Q}_b)^2}\left[(\bar{Q}_2 + \bar{Q}_3 - 2\bar{Q}_b)\left(\frac{\bar{d}}{D} - \left(\frac{\bar{d}}{D}\right)^2\right) + \bar{Q}_b\left(1 - 3\frac{\bar{d}}{D} + 3\left(\frac{\bar{d}}{D}\right)^2\right)\right]$$
(7)

Consider simple conditions for DST: with negligible background, the variance vanishes both at d = 0 and at d = D, while it shows a maximum of σd² = D²/[4(Q2 + Q3)] at d = D/2. For a depth range D = 20 m and an average signal level Q2 + Q3 = 1,000 photons, the uncertainty is σd = 32 cm rms (see also the discussion of Fig. 5 below). Under the same conditions, the LST approach achieves a precision of only σd = 55 cm rms. The difference between the two methods lies in a form of correlation among the time-window integrations. In the LST case the total amount of photons of the whole reflected pulse is contained in the sample Q0, while the useful distance information is provided only by Q1; hence the sample Q0 is completely uncorrelated from the sample Q1. With the DST method, instead, the distance information is contained in Q3 (which is equivalent to Q1), while the total pulse is contained in the sum of Q2 and Q3. The correlation between the whole-pulse integration and the sliced integration thus leads to a better precision for DST than for LST.

Fig. 5 Simulations of computed vs. real distance (top) and rms standard deviation (bottom) for the LST (left) and DST (right) methods. 100 trials were considered, with an average of 1,000 signal photons per integration window and no background.
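These two numbers follow directly from Eq. (3) and Eq. (7) evaluated at d = D/2 with zero background; a quick numerical check (a sketch under exactly those assumptions) reproduces them:

```python
import math

D = 20.0    # depth range, m
N = 1000.0  # signal photons: Q0 for LST, Q2 + Q3 for DST
x = 0.5     # d/D, worst case at mid-range

# Eq. (3) with Qb = 0: sigma_d^2 = (D^2 / N) * (2 - 3x + x^2)  [LST]
sigma_lst = D * math.sqrt((2 - 3 * x + x**2) / N)
# Eq. (7) with Qb = 0: sigma_d^2 = (D^2 / N) * (x - x^2)       [DST]
sigma_dst = D * math.sqrt((x - x**2) / N)

print(f"LST: {sigma_lst:.2f} m rms")  # ~0.55 m
print(f"DST: {sigma_dst:.2f} m rms")  # ~0.32 m
```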

3. Simulations

In order to validate the analytical model derived in Section 2 for both LST and DST, we performed Monte Carlo simulations, modeling the Poisson statistics of both background and reflected signal, in order to assess and compare the performance under different operating conditions. In actual measurements, the intensity of the reflected signal that eventually reaches the detector depends not only on the distance of the object but also on the geometrical characteristics of the objects in the scene (reflectivity, surface quality, slope of the sides) and on the optics (field of view, lens performance). In order to remove these dependences and perform a fair comparison of the two methods, our simulations were parameterized at the detector side, i.e. we considered the net signal eventually reaching the detector, for both signal and background.

We divided the entire depth range D into 1,000 steps and simulated the corresponding 1,000 real distances with both methods. For every single distance d, we performed 100 trials. In each trial we generated five uncorrelated Poisson-distributed random numbers, for the background Qb, the “total” signal Q0, and the “sliced” signals Q1, Q2, Q3 (related to the set distance d). Then for each trial we computed the distance according to Eq. (1) and Eq. (6), obtaining two distributions of 100 values each, for the LST and DST approaches. Finally we computed the variances of those two distributions and compared them to those of Eq. (5) and Eq. (10).
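A condensed version of this procedure for the LST case might look as follows (a NumPy-based sketch of the simulation just described; the DST branch, which needs the Q2/Q3 slicing, is analogous):

```python
import numpy as np

rng = np.random.default_rng(0)
D, n_steps, n_trials = 20.0, 1000, 100
sig, bkg = 1000.0, 0.0             # mean signal and background photons

true_d = np.linspace(0.0, D, n_steps)
frac = 1.0 - true_d / D            # fraction of the pulse falling inside W1

# Poisson draws: Q0 holds the whole pulse, Q1 only the slice, Qb background.
q0 = rng.poisson(sig + bkg, size=(n_trials, n_steps))
q1 = rng.poisson(sig * frac + bkg, size=(n_trials, n_steps))
qb = rng.poisson(bkg, size=(n_trials, n_steps))

d_est = D * (q0 - q1) / np.maximum(q0 - qb, 1)  # Eq. (1), guarded against zero
print(d_est.std(axis=0).max())     # worst-case rms spread, ~0.55 m near d = D/2
```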

The simulation results shown in Fig. 5 are based on an average of 1,000 signal photons per frame and no background photons. The plots at the top of the figure show 100 trials of the computed distance d vs. the real one; the plots at the bottom show the standard deviation of those 100 trials along all the 1,000 simulated distances. The good match between the analytical models and the Monte Carlo simulations confirms that the statistical spread is larger for the LST approach (left) than for the DST method (right), as expected. At the same distance of 10 m, the precision is around 6% for the LST method and 3% for the DST one.

In Fig. 6 the signal level is increased to 100,000 photons per integration window and the background is set equal to the signal level, i.e. 100,000 spurious counts. Again, the analytical trend matches the Monte Carlo trials very well. Moreover, the DST timing again outperforms the original LST one, with percentage errors at 10 m of about 1% for LST and 0.45% for DST. From Eq. (3) and Eq. (7), the precision depends both on the signal photons (Q0 or Q2 + Q3) and on the background photons Qb: more background photons lead to worse precision. This is particularly evident near the boundaries of the measurement range, where the contribution of the signal photons to at least one of the signal samples (Q0, Q2 or Q3) tends to become negligible compared to the background. This explains the dramatic change between the situations depicted in Fig. 5 and Fig. 6. Consider for instance the DST technique.

Fig. 6 Simulations of computed vs. real distance (top) and rms standard deviation (bottom) for the LST (left) and DST (right) methods. 100 trials were considered, with an average of 100,000 signal photons and 100,000 background photons per integration window.

4. Measurements

We performed several experimental measurements to check and validate both iTOF approaches. We employed a prototype camera for 2D photon counting [16,17], modified for the purpose. The 2D sensor is a square array of 32 × 32 pixels [18], each with a CMOS Single-Photon Avalanche Diode (SPAD) detector of 20 µm diameter [19]. A SPAD is a photodiode based on a p-n junction biased well above the breakdown voltage: in steady-state conditions, no current flows through the SPAD [20]. When a photon is absorbed within the SPAD active volume, an avalanche current in the milliampere range is triggered and the sensing front-end provides a digital pulse (as in Geiger counters), which increments an in-pixel digital counter. At the end of the frame, the counts from all pixels are read out by an FPGA on board the camera. Thanks to a gate input, the CMOS SPAD sensor can be enabled to count within user-defined time windows, as required in Fig. 2 and Fig. 3 for the LST approach, and similarly for the DST one.

The 2D SPAD camera is shown on the left of Fig. 7, while the laser-based illuminator that allows the SPAD camera to be exploited as an iTOF 3D camera is shown on the right. The images from the 1,024 pixels are uploaded to a remote computer through a USB 2.0 link.

Fig. 7 3D acquisition system, based on a 2D camera employing a 32 × 32 pixel CMOS SPAD array (left), equipped with the pulsed-light illuminator (right).

We used a custom laser illumination source, able to emit pulses of 150 ns duration (corresponding to a maximum depth range D of 22.5 m) with a period of 600 ns (25% duty cycle) and 750 mW peak power at 808 nm. In order to decrease the background and improve the measurement precision, we inserted a narrow band-pass filter in front of the camera. Both methods were tested at a frame rate of 10 fps, i.e. the measured distance was computed every 100 ms. A total of 500 frames were collected to properly compute the mean and standard deviation of the measurements. We considered 15 distances within the maximum depth range of 22.5 m.
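The quoted range follows from the pulse duration once the round trip is accounted for; a quick check of the derived parameters (assuming the round-trip convention behind the 22.5 m figure):

```python
C = 299_792_458.0  # speed of light, m/s

pulse_ns, period_ns = 150.0, 600.0
d_max = C * pulse_ns * 1e-9 / 2    # light must travel out and back -> ~22.5 m
duty = pulse_ns / period_ns        # 150 ns / 600 ns -> 25%

print(f"max range ~ {d_max:.1f} m, duty cycle {duty:.0%}")
```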

As already discussed in Section 3, in order to avoid any dependence of the acquisitions on the scene under observation, and to guarantee a fair comparison between the LST and DST methods, we introduced, by means of a programmable delayer, a time delay between the camera sync-out clock and the laser trigger-in, corresponding to the distance under investigation. In this way the same light levels (both background and signal) are maintained at all distances. We also used neutral-density filters to reduce the amount of light and avoid pixel saturation.

At 10 fps (frames per second), i.e. with a frame duration of 100 ms, the camera acquired a background level of about 80 photons per frame, while the signal was about 2,600 photons per frame. Figure 8 shows how well the analytical model fits the standard deviation actually measured over the whole depth range. The measured input-output characteristics of both methods are shown in Fig. 9, compared to the ideal characteristics. Figure 10 shows the relative error between expected and measured distances for both methods: the linearity is always better than 6% for both methods over the 21 m depth range.

Fig. 8 Standard deviation of the measurements for the LST (left) and DST (right) methods, under the same detection conditions. The DST method shows better precision along the whole depth range.

Fig. 9 Computed (i.e. measured) vs. real distance for the LST and DST pulsed-light iTOF methods.

Fig. 10 Relative error for the DST and LST pulsed-light iTOF methods.

Regarding Fig. 9, the saturation of the characteristic at the end of the range is due to the non-ideal shape of the square excitation light pulse, as clearly shown in Fig. 11, whose rising and falling edges are far from ideally steep. In fact, at longer distances d (i.e. longer ΔtTOF delays), both time windows W1 (see Fig. 1) and W3 (see Fig. 4) collect less and less signal because of the slow rising edge of the emitted light (see the initial part of the excitation pulse in Fig. 11).

Fig. 11 Measured optical pulse (blue dots) vs. ideal one (red solid line).

In order to verify the correctness of the analysis in a real environment, we acquired different scenes, like the one shown in Fig. 12, with a car at about 5 m from the camera, with a rear door open and the background wall placed at about 22 m. Again we operated the 3D camera at 10 fps and compared the distances computed with the LST and DST pulsed-light iTOF approaches. Figure 13 shows the 3D image averaged over 100 frames. Although the image quality is drastically impaired by the limited number of pixels (1,024) of the SPAD array employed, it is still possible to resolve the rear wall through the window panes of the car and the seats inside it.

Fig. 12 Scene under investigation, acquired with a standard color camera.

Fig. 13 Gray-scale 3D distance maps of the scene shown in Fig. 12, with a car at 5 m from the camera, for both the LST (left) and DST (right) methods.

In order to better appreciate the performance of the two approaches, we computed the standard deviation of the measured distance for every pixel of the image over 100 frames. The resulting precision map is shown in Fig. 14. Note that it does not provide the 3D image of the scene, but the level of uncertainty of the computed distance at each pixel position: darker pixels correspond to lower standard deviation, i.e. better depth precision. As expected, the DST method provides better and more uniform precision than the original LST approach: the improvement in depth precision ranges from 1.5 to 3 times across all pixels. Figure 15 shows a few frames from a sample movie acquired with the DST p-iTOF technique and the 1,024-pixel SPAD camera: a person is walking back and forth between about 4 m and 14 m in a room.

Fig. 14 Gray-scale precision (i.e. standard deviation) map of the measurements for each pixel of the sensor, for the LST (left) and DST (right) methods. Darker pixels indicate better precision (i.e. lower uncertainty). The DST method shows both better precision and better uniformity than the LST one.

Fig. 15 Some frames of a movie running at 25 fps in an indoor environment: a person is walking back and forth from the camera, between 4 m and 14 m. Brighter pixels represent longer distances. The movie was acquired with the DST approach and the 32 × 32 pixel SPAD camera.

5. Conclusions

We discussed a double-sampling pulsed-light indirect time-of-flight method for 3D ranging measurements. The proper choice of photon-counting integration time windows, and the resulting “correlation” between samples, improves precision compared to the traditional timing pattern. The achieved depth precision is between 5 and 10% in standard conditions, with a simple light source emitting 750 mW peak power from an 808 nm laser, easily reaching a distance of 20 m. A potential application of this 3D camera and acquisition technique is in the automotive field, for front and rear obstacle monitoring in highly sensitive pre-crash safety systems.

Acknowledgments

This research was partially funded by the “MiSPiA” project, within the ICT theme of the EU Seventh Framework Programme (FP7, 2007-2013), under grant agreement no. 257646.

References and links

1. A. Leone, G. Diraco, and P. Siciliano, “Detecting falls with 3D range camera in ambient assisted living applications: A preliminary study,” Med. Eng. Phys. 33(6), 770–781 (2011).
2. N. J. Krichel, A. McCarthy, A. M. Wallace, J. Ye, and G. S. Buller, “Long-range depth imaging using time-correlated single-photon counting,” Proc. SPIE 7780, 77801I (2010).
3. J. R. Bruzzi, K. Strohbehn, B. G. Boone, S. Kerem, R. S. Layman, and M. W. Noble, “A compact laser altimeter for spacecraft landing applications,” Johns Hopkins APL Tech. Dig. 30, 331–345 (2012).
4. P. Mengel, L. Listl, B. König, C. Toepfer, M. Pellkofer, W. Brockherde, B. Hosticka, O. Elkhalili, O. Schrey, and W. Ulfig, “Three-dimensional CMOS image sensor for pedestrian protection and collision mitigation,” Adv. Microsyst. Automotive Appl. 2, 23–39 (2006).
5. S. May, B. Werner, H. Surmann, and K. Pervölz, “3D time-of-flight cameras for mobile robotics,” in 2006 IEEE/RSJ International Conference on Intelligent Robots and Systems (IEEE/RSJ, 2006), pp. 790–795.
6. F. Chiabrando, R. Chiabrando, D. Piatti, and F. Rinaudo, “Sensors for 3D imaging: metric evaluation and calibration of a CCD/CMOS time-of-flight camera,” Sensors (Basel) 9(12), 10080–10096 (2009).
7. F. Rinaudo, F. Chiabrando, F. Nex, and D. Piatti, “New instruments and technologies for cultural heritage survey: full integration between point clouds and digital photogrammetry,” Lect. Notes Comput. Sci. 6436, 56–70 (2010).
8. N. Cottini, M. De Nicola, M. Gottardi, and R. Manduchi, “A low-power stereo vision system based on a custom CMOS imager with positional data coding,” in 2011 7th Conference on Ph.D. Research in Microelectronics and Electronics (PRIME) (2011), pp. 161–164.
9. Y. Sooyeong, S. Jinho, H. Youngjin, and D. Hwang, “Active ranging system based on structured laser light image,” in Proceedings of SICE Annual Conference 2010 (2010), pp. 747–752.
10. D. Stoppa and A. Simoni, “Single-photon detectors for time-of-flight range imaging,” in Single-Photon Imaging, 1st ed., P. Seitz and A. J. P. Theuwissen, eds. (Springer, Berlin, 2011), pp. 275–300.
11. B. Markovic, S. Tisa, F. A. Villa, A. Tosi, and F. Zappa, “A high-linearity, 17 ps precision time-to-digital converter based on a single-stage Vernier delay loop fine interpolation,” IEEE Trans. Circuits Syst. I, Reg. Papers 99, 1–13 (2013).
12. M. Crotti, I. Rech, and M. Ghioni, “Four channel, 40 ps resolution, fully integrated time-to-amplitude converter for time-resolved photon counting,” IEEE J. Solid-State Circuits 47(3), 699–708 (2012).
13. J. S. Massa, G. S. Buller, A. C. Walker, S. Cova, M. Umasuthan, and A. M. Wallace, “Time-of-flight optical ranging system based on time-correlated single-photon counting,” Appl. Opt. 37(31), 7298–7304 (1998).
14. R. Lange and P. Seitz, “Solid-state time-of-flight range camera,” IEEE J. Quantum Electron. 37(3), 390–397 (2001).
15. M. L. Hafiane, W. Wagner, Z. Dibi, and O. Manck, “Analysis and estimation of NEP and DR in CMOS TOF-3D image sensor based on MDSI,” Sens. Actuators A Phys. 169(1), 66–73 (2011).
16. S. Bellisai, F. Guerrieri, S. Tisa, and F. Zappa, “3D ranging with a single-photon imaging array,” Proc. SPIE Conference on Sensors, Cameras, and Systems XII, 78750M (2011).
17. See the SPC2 module data sheet by MPD srl, http://www.micro-photon-devices.com/products_spc2.asp.
18. F. Guerrieri, S. Tisa, A. Tosi, and F. Zappa, “Two-dimensional SPAD imaging camera for photon counting,” IEEE Photonics J. 2(5), 759–774 (2010).
19. S. Tisa, F. Guerrieri, A. Tosi, and F. Zappa, “100 kframe/s 8 bit monolithic single-photon imagers,” in Proceedings of the 38th European Solid-State Device Research Conference (2008), pp. 274–277.
20. S. Cova, M. Ghioni, A. Lacaita, C. Samori, and F. Zappa, “Avalanche photodiodes and quenching circuits for single-photon detection,” Appl. Opt. 35(12), 1956–1976 (1996).

OCIS Codes
(030.5260) Coherence and statistical optics : Photon counting
(040.1240) Detectors : Arrays
(040.1490) Detectors : Cameras
(110.2970) Imaging systems : Image detection systems
(110.6880) Imaging systems : Three-dimensional image acquisition
(040.1345) Detectors : Avalanche photodiodes (APDs)

ToC Category:
Detectors

History
Original Manuscript: November 6, 2012
Revised Manuscript: January 18, 2013
Manuscript Accepted: January 25, 2013
Published: February 22, 2013

Citation
S. Bellisai, D. Bronzi, F. A. Villa, S. Tisa, A. Tosi, and F. Zappa, "Single-photon pulsed-light indirect time-of-flight 3D ranging," Opt. Express 21, 5086-5098 (2013)
http://www.opticsinfobase.org/oe/abstract.cfm?URI=oe-21-4-5086


