OSA's Digital Library

Optics Express

  • Editor: Andrew M. Weiner
  • Vol. 21, Iss. 23 — Nov. 18, 2013
  • pp: 28583–28596
Fast compressed sensing analysis for super-resolution imaging using L1-homotopy

Hazen P. Babcock, Jeffrey R. Moffitt, Yunlong Cao, and Xiaowei Zhuang  »View Author Affiliations


Optics Express, Vol. 21, Issue 23, pp. 28583-28596 (2013)
http://dx.doi.org/10.1364/OE.21.028583



Abstract

In super-resolution imaging techniques based on single-molecule switching and localization, the time to acquire a super-resolution image is limited by the maximum density of fluorescent emitters that can be accurately localized per imaging frame. In order to increase the imaging rate, several methods have been recently developed to analyze images with higher emitter densities. One powerful approach uses methods based on compressed sensing to increase the analyzable emitter density per imaging frame by several-fold compared to other reported approaches. However, the computational cost of this approach, which uses interior point methods, is high, and analysis of a typical 40 µm x 40 µm field-of-view super-resolution movie requires thousands of hours on a high-end desktop personal computer. Here, we demonstrate an alternative compressed-sensing algorithm, L1-Homotopy (L1H), which can generate super-resolution image reconstructions that are essentially identical to those derived using interior point methods in one to two orders of magnitude less time depending on the emitter density. Moreover, for an experimental data set with varying emitter density, L1H analysis is ~300-fold faster than interior point methods. This drastic reduction in computational time should allow the compressed sensing approach to be routinely applied to super-resolution image analysis.

© 2013 Optical Society of America

1. Introduction

However, because individual emitters are activated stochastically and observed as diffraction-limited spots, the maximum density of resolvable emitters per imaging frame is limited. This emitter density limit in turn sets the number of camera frames required to construct a super-resolution image and, hence, the temporal resolution of STORM imaging. Thus, a lower permissible emitter density per frame may restrict the ability to follow processes in dynamic environments such as live cells [4,6,8]. In parallel, the inability to resolve overlapping fluorophores may also lead to underestimates of the number of emitters in regions of higher density, effectively limiting the dynamic range of the STORM image. For these reasons, better methods for handling overlapping emitters could substantially improve the performance of STORM imaging.

Recently, several methods have been introduced to resolve partially overlapping emitter images [9–15]. Among these methods, one powerful approach [15] allows the highest density of emitters to be localized to date by adopting techniques from the rapidly advancing field of compressed sensing. In the basic compressed sensing problem, a sparse signal vector, i.e. a signal in which only a few basis elements are non-zero, is reconstructed from a noisy, under-determined measurement taken in a basis in which the signal is not sparse. In the context of STORM, the sparse basis is a high-resolution grid on which fluorophore locations are represented—effectively an up-sampled, reconstructed image—while the noisy measurement basis is the lower-resolution camera pixels, on which the fluorescence signal is detected experimentally [15]. In this framework, the optimal reconstructed image is the one that contains the fewest fluorophores but reproduces the measured camera image to a given accuracy when convolved with the optical point-spread function (PSF) (Fig. 1).
Fig. 1 Schematic depiction of STORM imaging using compressed sensing. A subset of fluorescent emitters (red dots; top left) from a labeled sample (bottom left) is activated stochastically. The activated emitters are close enough that they are not individually distinguishable in a 4 pixel × 4 pixel conventional image (top left). Using compressed sensing, a high resolution grid of fluorophore locations is reconstructed (top right) from the low resolution image (top left) under the constraint that this reconstruction contains the smallest number of emitters that can reproduce the measured conventional image up to a given accuracy. This process is repeated, and the individual reconstructed frames are summed to produce the final high resolution reconstruction (lower right) of the original sample.
Remarkably, this technique can accurately process emitter densities 4–6-fold higher than those handled by a sparse-emitter fitting algorithm, which can only analyze non-overlapping images of emitters, and approximately 2–3-fold higher than those handled by other overlapping-emitter fitting algorithms [15]. However, the interior point method used in the reported compressed sensing approach [15] is computationally intensive and, by our estimate, reconstruction of a typical STORM image of a 40 µm × 40 µm field of view using this approach would take a few months on a typical research computer.

2. Theory

In the compressed sensing approach to STORM image reconstruction, the reconstructed image is a high resolution grid of fluorophore locations that, when convolved with the optical PSF, reproduces the measured low resolution image on the camera to an accuracy set by the expected noise in the image. Mathematically, this reconstruction can be achieved by solving the following constrained minimization problem:

\[
\text{Minimize: } \|x\|_1 \quad \text{subject to: } \|Ax - b\|_2 \le \varepsilon,
\tag{1}
\]

where x is the up-sampled, reconstructed image, b is the experimentally observed image, and A is the PSF matrix (of size M × N, where M and N are the numbers of pixels in b and x, respectively). \(\|x\|_1 = \sum_i |x_i|\) is the L1 norm of the up-sampled image, and its minimization generally enforces the sparsity of the reconstructed image. \(\|Ax - b\|_2 = \sqrt{\sum_i (Ax - b)_i^2}\) is the L2 norm of the difference between the measured image and the reconstructed image, and it measures the accuracy of the reconstruction. The inequality constraint on the L2 norm allows some inaccuracy in the image reconstruction to accommodate the statistical corruption of the image by noise [15].
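To make the role of the PSF matrix concrete, here is a minimal sketch of how A could be assembled for a small region, assuming a separable Gaussian PSF of width one camera pixel and an 8-fold up-sampled grid. The function name and parameters are illustrative, not the authors' code:

```python
import numpy as np

def build_psf_matrix(n_cam=7, upsample=8, psf_sigma=1.0):
    """Build the M x N matrix A mapping an up-sampled emitter grid to
    camera pixels, assuming a Gaussian PSF (sigma in camera-pixel units).
    Each column is the camera image of a unit emitter at one fine-grid
    position."""
    n_fine = n_cam * upsample
    # Fine-grid emitter coordinates, in camera-pixel units.
    fine = (np.arange(n_fine) + 0.5) / upsample
    # Camera pixel centers.
    cam = np.arange(n_cam) + 0.5
    # 1D Gaussian response of each camera pixel to each fine position.
    g = np.exp(-(cam[:, None] - fine[None, :]) ** 2 / (2 * psf_sigma ** 2))
    g /= g.sum(axis=0, keepdims=True)   # unit total signal per emitter
    # The 2D Gaussian PSF is separable, so A is a Kronecker product.
    return np.kron(g, g)                # shape (n_cam**2, n_fine**2)

A = build_psf_matrix()
print(A.shape)   # (49, 3136)
```

With this layout, M = 49 camera pixels and N = 3136 up-sampled pixels, matching the 7 × 7 regions analyzed later in the paper.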

Equation (1) was previously solved using interior point methods [15]—iterative algorithms broadly applicable to a large class of minimization problems. Unfortunately, at each iteration such methods must invert an N × N matrix—an operation whose complexity scales as O(N³). As N is typically of order 10³–10⁴, 10⁹–10¹² floating point operations are needed for each iteration. In practice, reconstructing a ~40 µm × 40 µm STORM image with ~20 nm resolution using Eq. (1) and interior point methods takes about 30 minutes per camera frame on a high-end desktop PC. Since the typical movie used to construct a single STORM image contains thousands to tens of thousands of frames, it would take thousands of hours to reconstruct a typical STORM image. The long computational time has thus limited the use of compressed sensing in the reconstruction of STORM images to small fields of view, for example ~3 µm × 3 µm [15].

However, interior point methods are not necessarily the most efficient way to solve this minimization problem, and there is a rapidly growing collection of alternative methods [19,20]. Many of these alternative algorithms solve the following minimization problem:

\[
\text{Minimize: } \|Ax - b\|_2^2 + \lambda \|x\|_1.
\tag{2}
\]

This problem is equivalent to that of Eq. (1): for every value of the noise parameter ε there exists a value of λ for which the solutions are identical. In practice, solving Eq. (2) is often computationally simpler than solving Eq. (1), which is why many alternative algorithms have been reported to be faster than interior point methods [19,20].
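To illustrate why Eq. (2) is attractive computationally, a minimal (and deliberately unoptimized) iterative soft-thresholding (ISTA) solver for Eq. (2) with a non-negativity constraint fits in a few lines; each iteration costs only matrix-vector products, in contrast to the matrix inversions required by interior point methods. This is an illustrative sketch, not the L1H algorithm used in this work:

```python
import numpy as np

def ista_nonneg(A, b, lam, n_iter=500):
    """Minimize ||Ax - b||_2^2 + lam*||x||_1 subject to x >= 0 by
    iterative soft-thresholding (ISTA). Toy sketch, not L1H."""
    L = 2 * np.linalg.norm(A, 2) ** 2   # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = 2 * A.T @ (A @ x - b)
        # Gradient step, then the non-negative soft-threshold
        # (the proximal operator of lam*||.||_1 restricted to x >= 0).
        x = np.maximum(x - (grad + lam) / L, 0.0)
    return x
```

For A equal to the identity, the fixed point is the closed-form non-negative soft-threshold max(b − λ/2, 0), which makes the iteration easy to sanity-check.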

3. Methods

Algorithm and Image Analysis. The L1H algorithm used here was written in C with a Python interface and is based on the approach described by Donoho and Tsaig [18], with two modifications. First, because negative photon numbers are unphysical, we restrict the algorithm to consider only non-negative values of x. Second, we modified the algorithm to stop when the appropriate value of λ has been identified, as discussed above. The FISTA algorithm used here was also written in C with a Python interface, following [22]. Source code and executables of the L1H and FISTA algorithms are available at http://zhuang.harvard.edu/software.html. For comparisons, we used the Matlab interior-point-method implementation of the compressed sensing algorithm provided in [15]. The core of this code is the optimized interior point methods provided in the freeware package CVX [23]. All reported data were calculated with the default precision settings of this package; however, we observed little difference in the required computation time when these settings were optimized for speed. All analysis was conducted on a single core of an Intel i7-3770 3.4 GHz CPU.

To limit the size of the A matrix, we divided the fluorescence images from the camera into sub-regions and analyzed these regions separately, as was done previously with interior point methods [15]. Specifically, we divided the measured images into multiple, partially overlapping 7 pixel × 7 pixel regions. To analyze each region, we allowed emitters to be placed within an 8 pixel × 8 pixel space, corresponding to a 1/2 pixel extension in each direction from the measured region, in order to account for the fluorescence signal contributed by emitters just outside the measured region due to the finite PSF size, as shown in Fig. 3.
Fig. 3 Analysis of a sub-region of the image by L1-Homotopy. The wide-field fluorescence image detected by the camera is first divided into partially overlapping sub-regions of 7 × 7 camera pixels (a camera pixel is represented by a black box); one such region is shown here. In the reconstruction of this region, we allow emitters to be placed within an 8 × 8 camera pixel space, corresponding to a 1/2 pixel extension in each direction from the 7 × 7 camera pixel region, and the emitter positions are allowed on a finer, 8-fold up-sampled grid (red dots). To further limit errors that might arise from edge effects, only emitter localizations within the central 5 pixel × 5 pixel block defined by the thicker black line are kept. Many of these non-overlapping, 5-pixel-wide, up-sampled images are then stitched together to form the final STORM image.
To further limit errors due to edge effects, we retained only emitters located within a central 5 pixel × 5 pixel region. These 5 × 5-camera-pixel up-sampled images, with neighboring images touching but not overlapping, were then stitched together to form the final STORM image. In addition, following [15], we included a background term in the A matrix that allows for a non-zero uniform background correction. Because the analyzed regions are small (7 × 7 pixels), this background correction makes the solution fairly robust to the slowly varying background typically encountered in fluorescence images.
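The tiling scheme described above can be sketched as follows, with a placeholder `solve_region` standing in for the per-region compressed-sensing solve. Names and border handling are illustrative (image borders not covered by a full tile are simply left empty here):

```python
import numpy as np

def analyze_by_tiles(image, solve_region, upsample=8, region=7, keep=5):
    """Tile a camera image into overlapping region x region blocks,
    analyze each, and stitch the central keep x keep blocks of the
    up-sampled reconstructions. Tiles step by `keep` pixels so the
    kept centers touch without overlapping. (Illustrative sketch.)"""
    ny, nx = image.shape
    pad = (region - keep) // 2          # 1-pixel margin on each side
    out = np.zeros((ny * upsample, nx * upsample))
    for i in range(0, ny - region + 1, keep):
        for j in range(0, nx - region + 1, keep):
            tile = image[i:i + region, j:j + region]
            # solve_region returns an up-sampled region x region
            # reconstruction of this tile.
            rec = solve_region(tile)
            c = pad * upsample          # trim the 1-pixel margin
            k = keep * upsample
            out[(i + pad) * upsample:(i + pad) * upsample + k,
                (j + pad) * upsample:(j + pad) * upsample + k] = \
                rec[c:c + k, c:c + k]
    return out
```

With a trivial nearest-neighbor "solver" the stitched output reproduces the up-sampled input over the covered interior, which is a convenient check that the bookkeeping is right.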

For a noise-free detector, the noise parameter ε would be set by the Poisson noise of the detected photons, i.e. \(\varepsilon = \sqrt{\sum_i b_i}\). However, it was previously shown that a slightly larger noise parameter produces higher quality reconstructions [15]: for simulated data, the optimized ε was 1.5 times the theoretical value, and for real data collected with an electron-multiplying CCD camera it was 2.1 times the theoretical value. We used these optimized ε values in this work.
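As a concrete note, the noise parameter is cheap to compute from the measured sub-region itself. A one-line helper (illustrative; the default scale factor follows the EMCCD value quoted above):

```python
import numpy as np

def noise_parameter(b, scale=2.1):
    """Noise parameter for the L2 constraint of Eq. (1): the Poisson
    limit sqrt(sum_i b_i), inflated by an empirical factor (1.5 for
    simulated data, 2.1 for EMCCD data, per the text)."""
    return scale * np.sqrt(np.sum(b))
```

For example, a 7 × 7 region with a uniform 100 photons per pixel gives a theoretical ε of 70 photons.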

Simulated Data. For all simulated data, we randomly distributed emitters on an 8-fold up-sampled grid and generated the number of photons for each emitter from a lognormal distribution with a mean and standard deviation of 3000 and 1700 photons, respectively. A Gaussian PSF of width equal to one camera pixel was used to convolve the signal from individual emitters, and the photons from all up-sampled pixels within a low-resolution, camera pixel were summed. Finally, a uniform background of 70 photons was added to each camera pixel and the image was corrupted with Poisson distributed noise with a mean set by the number of photons in each pixel. In all simulations, the camera pixel is assumed to be 167 nm in size. These values were chosen to match our typical experimental data. Using this approach, 7 × 7 camera pixel images were generated, and these simulated images were analyzed by both the L1H and CVX methods to compare the accuracy of the reconstructions. We also generated 256 × 256 camera pixel images to compare the computation time of these two methods. Reported computation times are the average of three independent trials.
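The simulation recipe above can be sketched as follows. Parameter names are ours; the PSF width, photon statistics, pixel size, and background follow the values given in the text:

```python
import numpy as np

def simulate_frame(density, n_cam=7, upsample=8, pixel_nm=167.0,
                   bg=70, mean_photons=3000, sd_photons=1700, rng=None):
    """Simulate one high-density STORM frame: emitters on an 8x
    up-sampled grid, lognormal photon counts (mean 3000, s.d. 1700),
    unit-pixel Gaussian PSF, uniform 70-photon background, Poisson
    noise. Density is in emitters per um^2. (Illustrative sketch.)"""
    rng = np.random.default_rng() if rng is None else rng
    area_um2 = (n_cam * pixel_nm / 1000.0) ** 2
    n_emit = rng.poisson(density * area_um2)
    # Lognormal parameters giving the desired mean and s.d.
    var = sd_photons ** 2
    mu = np.log(mean_photons ** 2 / np.sqrt(var + mean_photons ** 2))
    sigma = np.sqrt(np.log(1 + var / mean_photons ** 2))
    photons = rng.lognormal(mu, sigma, n_emit)
    # Emitter positions on the fine grid, in camera-pixel units.
    pos = (rng.integers(0, n_cam * upsample, (n_emit, 2)) + 0.5) / upsample
    yy, xx = np.mgrid[0:n_cam, 0:n_cam] + 0.5
    frame = np.full((n_cam, n_cam), float(bg))
    for (y, x), p in zip(pos, photons):
        psf = np.exp(-((yy - y) ** 2 + (xx - x) ** 2) / 2.0)  # sigma = 1 px
        frame += p * psf / psf.sum()
    return rng.poisson(frame).astype(float)
```

Repeating this for many frames, and for larger 256 × 256 pixel fields, yields test data of the kind used for the timing and accuracy comparisons below.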

Real Data. To test the L1H algorithm on real data, we collected images of immunostained microtubules. BS-C-1 cells were fixed with 3% paraformaldehyde and 0.1% glutaraldehyde in phosphate-buffered saline (PBS) for 10 minutes at room temperature and then treated with 0.1% sodium borohydride for 7 minutes to reduce auto-fluorescence. After blocking with 3% w/v bovine serum albumin (BSA) and 0.1% v/v Triton X-100 in PBS, the microtubules were labeled with a mouse monoclonal anti-beta-tubulin primary antibody (Sigma, T5201) followed by a donkey anti-mouse secondary antibody (Jackson Labs, 715-005-150) conjugated with Alexa Fluor 405 and Alexa Fluor 647 [13]. Alexa Fluor 647 is a photoswitchable dye, and Alexa Fluor 405 facilitates its photoactivation. STORM images were collected on an Olympus IX-71 inverted optical microscope as described previously [24,25]. Briefly, a 647 nm laser was used both to image Alexa Fluor 647 and to switch it off, and a 405 nm laser was used to reactivate Alexa Fluor 647 to the fluorescent state. The 405 nm laser was used at low intensities such that only a subset of the Alexa Fluor 647 molecules was activated, allowing the locations of these molecules to be determined using either L1H or CVX.

We find that compressed sensing is sensitive to differences between the assumed PSF and the actual PSF; thus, we used an experimentally determined PSF to analyze the real STORM data with the L1H and CVX algorithms. The PSF of the STORM microscope was determined by analyzing the images of well separated single Alexa Fluor 647 molecules. The pixel intensity profile of the images of these molecules was fit with a cubic spline. Five hundred splines were aligned to each other at the centroid positions, and then averaged to create a single cubic spline estimate of the PSF. This spline PSF estimate was then used to create the A matrix used for this data.
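A sketch of this PSF-averaging procedure, using a bivariate cubic spline from SciPy (the helper name and grid choices are ours, not the authors' exact code):

```python
import numpy as np
from scipy.interpolate import RectBivariateSpline

def estimate_psf(molecule_images, out_size=7, upsample=8):
    """Estimate the PSF from images of isolated single molecules:
    fit each normalized image with a cubic spline, resample it on a
    common up-sampled grid centered at its centroid, and average.
    Sketch of the procedure described in the text."""
    n = molecule_images[0].shape[0]
    yx = np.arange(n) + 0.5                 # pixel-center coordinates
    # Common output grid in camera-pixel units, centered at 0.
    half = out_size / 2.0
    g = (np.arange(out_size * upsample) + 0.5) / upsample - half
    acc = np.zeros((g.size, g.size))
    for img in molecule_images:
        img = img / img.sum()
        cy = (img.sum(axis=1) * yx).sum()   # centroid (rows)
        cx = (img.sum(axis=0) * yx).sum()   # centroid (columns)
        spl = RectBivariateSpline(yx, yx, img, kx=3, ky=3)
        acc += spl(g + cy, g + cx)          # recenter and resample
    psf = acc / len(molecule_images)
    return np.clip(psf, 0.0, None)          # clip spline undershoot
```

The averaged spline PSF can then be sampled on the fine grid to build the columns of the A matrix, in place of an idealized Gaussian.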

4. Results

In theory, interior point methods and L1H should produce identical solutions. Nonetheless, as both algorithms are subject to stability issues and round-off errors, we first tested computationally whether the two algorithms produce identical reconstructions of STORM data. We constructed a set of simulated 7 × 7 camera pixel STORM data and compared the results derived from our L1H algorithm and from the interior point method (CVX) [15,23].
Fig. 4 L1-Homotopy produces nearly identical reconstructions to interior point methods. (a-c; left) Simulated, high-density, 7 × 7 camera pixel single-frame conventional images. The locations of individual emitters are marked with red crosses. (a-c; middle) The CVX reconstruction of the emitter locations. (a-c; right) The L1H reconstruction of the same images. The arrows in (c) highlight a rare difference between the two solutions. (d) Histogram of the distance between an emitter in the CVX reconstruction and the nearest emitter in the L1H reconstruction for the same simulated data, plotted for three different emitter densities. (e) The average distance from every emitter in the CVX solution to the nearest emitter in the L1H solution as a function of emitter density. (f) The average fractional difference in the number of emitters found in the CVX and L1H solutions as a function of emitter density. (g) The average percent difference between the L1 norms of the CVX and L1H solutions. (h) The average percent difference between the residual image errors of the CVX and L1H solutions. (e)-(h) Both the mean (blue) and median (red) are provided. Error bars represent the standard deviation measured directly (blue) or estimated from the inter-quartile range (red).
Figure 4(a)-4(c) shows that the two algorithms produce reconstructions in which the vast majority of fluorophores are in the same locations. There are occasional differences between the two reconstructions, as depicted in Fig. 4(c). However, as shown in Fig. 4(d)-4(f), these differences are exceedingly rare, and when they do occur, the reconstructed positions of the fluorophores differ by only one up-sampled pixel (1/8 of a camera pixel). Moreover, we find that L1H and CVX produce solutions that differ in the L1 norm and in the residual image error by much less than 0.1%, as shown in Fig. 4(g)-4(h). Thus, we conclude that the two algorithms produce functionally equivalent reconstructions.

To further confirm that L1H produces functionally equivalent reconstructions to CVX, we generated a series of simulated 256 × 256 camera pixel STORM movies across a range of emitter densities and analyzed them with CVX and with L1H.
Fig. 5 Comparison of the reconstructed emitter density and localization error derived from L1H, CVX, a single-emitter fitting algorithm, and a multi-emitter fitting algorithm. (a) The density of reconstructed emitters as a function of the density of simulated emitters for a single-emitter fitting algorithm (green squares), a multi-emitter fitting algorithm (blue pluses), L1H (red crosses), and CVX (black diamonds). The dashed black line has the slope of 1. (b) The XY localization error for each algorithm labeled as in panel (a). The two panels in (a) and (b) cover different density ranges.
Figure 5 reveals that L1H and CVX produce reconstructions that are effectively identical in the density and localization precision of reconstructed emitters.
Fig. 6 L1-Homotopy reconstructs images with a substantially higher speed than interior point methods, but is slower than the emitter fitting algorithms. (a) The average analysis time for a 256 × 256 camera pixel image as a function of emitter density for the two compressed sensing algorithms CVX (black diamonds) and L1H (red crosses) as well as a multi-emitter fitting algorithm (blue pluses) and a single-emitter fitting algorithm (green squares). (b) The ratio of the analysis time per frame for CVX to L1H as a function of emitter density.
However, Fig. 6 reveals that L1H produced these reconstructions in roughly 10 – 250 fold less time than CVX. The speed improvement with L1H decreases as the emitter density increases. This density dependence reflects the fact that the computation time of L1H scales with the number of emitters while that of CVX scales primarily with the number of up-sampled pixels. Based on the above results, we conclude that L1H produces the same reconstructions as CVX but in a fraction of the time.

To determine the computational speed improvement for real data, where emitter density is non-uniform but can vary dramatically within a single image, we analyzed STORM images of immunostained microtubules in mammalian cells.
Fig. 7 L1-Homotopy analysis of experimental STORM data. (a) A sub-area of a single frame of a high-emitter-density STORM data set acquired from Alexa-647-labeled microtubules in BS-C-1 cells. Individual molecules found with L1H (green circles), CVX (blue crosses), and a single-emitter localization algorithm (red crosses) are plotted. (b) Average analysis time per frame (256 × 256 camera pixels) for L1H and CVX, estimated from the analysis time for the first 10 frames of a 5000-frame STORM movie of microtubules in a BS-C-1 cell. CVX takes 340-fold longer than L1H to analyze these frames. (c) The full reconstructed STORM image of this data set using a single-emitter localization algorithm. (d) The same data set reconstructed with L1H. The red arrows and arrowheads indicate two regions with high and low microtubule density, respectively. (e) A zoom-in of the area outlined by the red box in (c). (f) A zoom-in of the same area outlined by the red box in (d). The image reconstructed by the L1H compressed sensing algorithm is much smoother than that by the single-emitter localization algorithm because roughly 4-fold more fluorophores are localized by the L1H algorithm. Scale bars are 10 µm in (c, d) and 1 µm in (a, e, f).
Figure 7(a) shows that for the real STORM data we again find that CVX and L1H produce essentially identical solutions. However, using the time it took CVX to analyze the first 10 frames of this 5000-frame movie (256 × 256 camera pixels per frame), we estimate that a complete analysis of all 5000 frames would take roughly two months using a single core of a high-end PC. In contrast, L1H was able to analyze the entire 5000-frame movie in just under 3 hours using the same computer. The analysis of the first 10 frames using L1H was 340-fold faster than using CVX, as shown in Fig. 7(b). The average emitter density in this field of view is approximately 1 μm−2; however, the speed enhancement is substantially larger than that observed for the simulated data at a uniform density of 1 μm−2 because of the large variation in the local emitter density in the real STORM data. Since the typical STORM image does not have a uniform emitter density, we anticipate this experimentally derived speed enhancement to be representative of what will be achieved in the analysis of a wide range of real STORM data.

To further compare the performance of the compressed sensing approaches with other STORM image reconstruction algorithms, we also analyzed the 256 × 256 camera pixel simulated data with a single-emitter localization algorithm, which can only handle sparse emitter densities, and with a multi-emitter fitting algorithm, which can handle moderate-to-high emitter densities [9,13]. The comparison of the performance of these algorithms as a function of emitter density in Figs. 5 and 6 illustrates the benefits and potential limitations of a compressed sensing approach relative to other algorithms. Above a density of ~0.3 emitters μm−2, the single-emitter localization algorithm begins to miss a substantial fraction of emitters, and above an intermediate density of ~3 emitters μm−2 the multi-emitter fitting algorithm also begins to miss a substantial number of emitters. However, L1H and CVX both reconstruct the vast majority of the emitters up to a density of 5–6 emitters μm−2, as shown in Fig. 5(a). Thus, for the highest emitter densities, a compressed sensing approach is required to reconstruct most emitters.

We also quantified the accuracy with which these emitters are reconstructed, as seen in Fig. 5(b). At low densities, the multi-emitter fitting algorithm has the lowest localization error because it employs maximum likelihood estimation rather than the least-squares minimization [27] used by our single-emitter fitting algorithm. The multi-emitter algorithm continues to outperform compressed sensing in terms of localization error up to an intermediate density of 3–4 emitters μm−2; beyond this density, a compressed sensing approach is superior. These results are consistent with those reported previously [15]. However, it is important to note that this improvement in high-density image reconstruction comes at a cost: compressed sensing requires slightly more analysis time than multi-emitter fitting, even when using the significantly faster L1H algorithm, as seen in Fig. 6(a).

Fig. 8 A comparison of different analysis methods on simulated STORM data. (a) A single frame of the STORM movie. (b) The true locations of the emitters used for the simulation. (c) The STORM image reconstructed using a single-emitter fitting algorithm that can only handle sparse emitter densities. (d) The STORM image reconstructed using a multi-emitter fitting algorithm, DAOSTORM, that can handle moderate-to-high emitter densities. (e) The STORM image reconstructed using the deconSTORM algorithm. (f) The STORM image reconstructed using the L1H algorithm. All scale bars are 500 nm. The simulated STORM movie had an average density of 5 emitters μm−2 and was 200 frames long.
Finally, to qualitatively capture and summarize these benefits, we present the results of several different STORM image analysis methods applied to simulated STORM data with variable emitter density: a single-emitter fitting algorithm; DAOSTORM, a multi-emitter fitting algorithm; deconSTORM, a recently reported deconvolution analysis method [14]; and L1H. As seen in Fig. 8, in regions of low density all algorithms produce images of similar quality, but in regions of high emitter density the single- and multi-emitter fitting algorithms miss most of the emitters. deconSTORM and compressed sensing retain these emitters in the high-density regions and thus more faithfully reflect the dynamic range of the image. In addition, compressed sensing produces the sharpest reconstructed images within the high-density regions. This ability to accurately process data with a large range of emitter densities is a key advantage of compressed sensing approaches. Thus, while compressed sensing is still a computationally expensive analysis method, with the use of L1H it should now be practical to apply this method routinely. Moreover, compressed sensing is a versatile analysis method, and it should be possible to modify the A matrix to include PSFs of variable size and shape, which would extend compressed sensing to the analysis of three-dimensional data. The significant decrease in computational complexity afforded by L1H will partially offset the substantial increase in computational complexity of 3D implementations of compressed sensing. Finally, it may be possible to further increase computational speed by implementing L1H on GPUs or clusters of CPUs.

5. Conclusions

Acknowledgments

We thank Sang-Hee Shim for assistance with collecting the STORM data of microtubules. This work was supported by the National Institutes of Health (NIH) and the Gatsby Charitable Foundation. J.R.M. was supported by a Helen Hay Whitney Postdoctoral Fellowship. X.Z. is a Howard Hughes Medical Institute Investigator.

References and Links

1. M. J. Rust, M. Bates, and X. Zhuang, “Sub-diffraction-limit imaging by stochastic optical reconstruction microscopy (STORM),” Nat. Methods 3(10), 793–796 (2006). [CrossRef] [PubMed]
2. E. Betzig, G. H. Patterson, R. Sougrat, O. W. Lindwasser, S. Olenych, J. S. Bonifacino, M. W. Davidson, J. Lippincott-Schwartz, and H. F. Hess, “Imaging intracellular fluorescent proteins at nanometer resolution,” Science 313(5793), 1642–1645 (2006). [CrossRef] [PubMed]
3. S. T. Hess, T. P. Girirajan, and M. D. Mason, “Ultra-high resolution imaging by fluorescence photoactivation localization microscopy,” Biophys. J. 91(11), 4258–4272 (2006). [CrossRef] [PubMed]
4. H. Shroff, C. G. Galbraith, J. A. Galbraith, and E. Betzig, “Live-cell photoactivated localization microscopy of nanoscale adhesion dynamics,” Nat. Methods 5(5), 417–423 (2008). [CrossRef] [PubMed]
5. B. Huang, H. P. Babcock, and X. Zhuang, “Breaking the diffraction barrier: Super-resolution imaging of cells,” Cell 143(7), 1047–1058 (2010). [CrossRef] [PubMed]
6. S. A. Jones, S.-H. Shim, J. He, and X. Zhuang, “Fast, three-dimensional super-resolution imaging of live cells,” Nat. Methods 8(6), 499–505 (2011). [CrossRef] [PubMed]
7. R. P. J. Nieuwenhuizen, K. A. Lidke, M. Bates, D. L. Puig, D. Grünwald, S. Stallinga, and B. Rieger, “Measuring image resolution in optical nanoscopy,” Nat. Methods 10(6), 557–562 (2013). [CrossRef] [PubMed]
8. S.-H. Shim, C. Xia, G. Zhong, H. P. Babcock, J. C. Vaughan, B. Huang, X. Wang, C. Xu, G.-Q. Bi, and X. Zhuang, “Super-resolution fluorescence imaging of organelles in live cells with photoswitchable membrane probes,” Proc. Natl. Acad. Sci. U.S.A. 109(35), 13978–13983 (2012). [CrossRef] [PubMed]
9. S. J. Holden, S. Uphoff, and A. N. Kapanidis, “DAOSTORM: an algorithm for high-density super-resolution microscopy,” Nat. Methods 8(4), 279–280 (2011). [CrossRef] [PubMed]
10. F. Huang, S. L. Schwartz, J. M. Byars, and K. A. Lidke, “Simultaneous multiple-emitter fitting for single molecule super-resolution imaging,” Biomed. Opt. Express 2(5), 1377–1393 (2011). [CrossRef] [PubMed]
11. S. Cox, E. Rosten, J. Monypenny, T. Jovanovic-Talisman, D. T. Burnette, J. Lippincott-Schwartz, G. E. Jones, and R. Heintzmann, “Bayesian localization microscopy reveals nanoscale podosome dynamics,” Nat. Methods 9(2), 195–200 (2011). [CrossRef] [PubMed]
12. T. Quan, H. Zhu, X. Liu, Y. Liu, J. Ding, S. Zeng, and Z.-L. Huang, “High-density localization of active molecules using Structured Sparse Model and Bayesian Information Criterion,” Opt. Express 19(18), 16963–16974 (2011). [CrossRef] [PubMed]
13. H. P. Babcock, Y. M. Sigal, and X. Zhuang, “A high-density 3D localization algorithm for stochastic optical reconstruction microscopy,” Optical Nanoscopy 1(1), 6–10 (2012). [CrossRef]
14. E. A. Mukamel, H. P. Babcock, and X. Zhuang, “Statistical Deconvolution for Superresolution Fluorescence Microscopy,” Biophys. J. 102(10), 2391–2400 (2012). [CrossRef] [PubMed]
15. L. Zhu, W. Zhang, D. Elnatan, and B. Huang, “Faster STORM using compressed sensing,” Nat. Methods 9(7), 721–723 (2012). [CrossRef] [PubMed]
16. M. R. Osborne, B. Presnell, and B. A. Turlach, “A new approach to variable selection in least squares problems,” IMA J. Numer. Anal. 20(3), 389–403 (2000). [CrossRef]
17. D. M. Malioutov, M. Cetin, and A. S. Willsky, “Homotopy continuation for sparse signal representation,” ICASSP 5, 733–736 (2005).
18. D. L. Donoho and Y. Tsaig, “Fast Solution of the L1-norm Minimization Problem When the Solution May be Sparse,” IEEE Trans. Inf. Theory 54(11), 4789–4812 (2008). [CrossRef]
19. A. Y. Yang and S. S. Sastry, “Fast l1-minimization algorithms and an application in robust face recognition: a review,” Proceedings of 2010 IEEE 17th International Conference on Image Processing, 1849–1852 (2010). [CrossRef]
20. A. Y. Yang, Z. Zhou, A. Ganesh, S. S. Shankar, and Y. Ma, “Fast L1-Minimization Algorithms For Robust Face Recognition,” IEEE Trans. Image Process. 22(8), 3234–3246 (2013).
21. E. van den Berg and M. P. Friedlander, “Probing the Pareto frontier for basis pursuit solutions,” SIAM J. Sci. Comput. 31(2), 890–912 (2009). [CrossRef]
22. A. Beck and M. Teboulle, “A fast iterative shrinkage-thresholding algorithm for linear inverse problems,” SIAM Journal on Imaging Sciences 2(1), 183–202 (2009). [CrossRef]
23. M. C. Grant and S. P. Boyd, “CVX: Matlab software for disciplined convex programming, version 2.0 beta,” http://cvxr.com/cvx (2013).
24. M. Bates, B. Huang, G. T. Dempsey, and X. Zhuang, “Multicolor super-resolution imaging with photo-switchable fluorescent probes,” Science 317(5845), 1749–1753 (2007). [CrossRef] [PubMed]
25. B. Huang, W. Wang, M. Bates, and X. Zhuang, “Three-dimensional super-resolution imaging by stochastic optical reconstruction microscopy,” Science 319(5864), 810–813 (2008). [CrossRef] [PubMed]
26. M. Grant, S. Boyd, and Y. Ye, “CVX: Matlab Software for Disciplined Convex Programming, Version 1.0 beta 3,” Recent Advances in Learning and Control, 95–110 (2006).
27. K. I. Mortensen, L. S. Churchman, J. A. Spudich, and H. Flyvbjerg, “Optimized localization analysis for single-molecule tracking and super-resolution microscopy,” Nat. Methods 7(5), 377–381 (2010). [CrossRef] [PubMed]

OCIS Codes
(100.6640) Image processing : Superresolution
(110.2960) Imaging systems : Image analysis
(170.2520) Medical optics and biotechnology : Fluorescence microscopy

ToC Category:
Image Processing

History
Original Manuscript: September 3, 2013
Revised Manuscript: November 4, 2013
Manuscript Accepted: November 6, 2013
Published: November 13, 2013

Virtual Issues
Vol. 9, Iss. 1 Virtual Journal for Biomedical Optics

Citation
Hazen P. Babcock, Jeffrey R. Moffitt, Yunlong Cao, and Xiaowei Zhuang, "Fast compressed sensing analysis for super-resolution imaging using L1-homotopy," Opt. Express 21, 28583-28596 (2013)
http://www.opticsinfobase.org/oe/abstract.cfm?URI=oe-21-23-28583

