Optics Express

  • Editor: C. Martijn de Sterke
  • Vol. 20, Iss. 22 — Oct. 22, 2012
  • pp: 24585–24599

Flexible structured illumination microscope with a programmable illumination array

Pavel Křížek, Ivan Raška, and Guy M. Hagen  »View Author Affiliations


Optics Express, Vol. 20, Issue 22, pp. 24585-24599 (2012)
http://dx.doi.org/10.1364/OE.20.024585


Abstract


Structured illumination microscopy (SIM) has grown into a family of methods which achieve optical sectioning, resolution beyond the Abbe limit, or a combination of both effects in optical microscopy. SIM techniques rely on illumination of a sample with patterns of light which must be shifted between each acquired image. The patterns are typically created with physical gratings or masks, and the final optically sectioned or high resolution image is obtained computationally after data acquisition. We used a flexible, high speed ferroelectric liquid crystal microdisplay for definition of the illumination pattern coupled with widefield detection. Focusing on optical sectioning, we developed a unique and highly accurate calibration approach which allowed us to determine a mathematical model describing the mapping of the illumination pattern from the microdisplay to the camera sensor. This is important for higher performance image processing methods such as scaled subtraction of the out of focus light, which require knowledge of the illumination pattern position in the acquired data. We evaluated the signal to noise ratio and the sectioning ability of the reconstructed images for several data processing methods and illumination patterns with a wide range of spatial frequencies. We present our results on a thin fluorescent layer sample and also on biological samples, where we achieved thinner optical sections than either confocal laser scanning or spinning disk microscopes.

© 2012 OSA

1. Introduction

Structured illumination microscopy (SIM) works by acquiring a set of images at a given focal plane using widefield detection, where each image in the set is made with a different position of an illumination mask but with no mask in the detection path [1]. Subsequent image processing is always needed to yield an optically sectioned image [1–3] or an image with resolution beyond the Abbe limit [4, 5].

We focused on improvements to the “SIM for optical sectioning” application. The most familiar implementation of this technique was introduced in 1997 by Neil et al. [3]. Their method works by projecting a line illumination pattern onto a sample, followed by acquisition of a set of three widefield images with the pattern shifted by relative spatial phases 0, 2π/3, and 4π/3. An optically sectioned image can be recovered computationally as

I_C = [(I_1 − I_2)^2 + (I_1 − I_3)^2 + (I_2 − I_3)^2]^(1/2),    (1)

where I_C is the optically sectioned image, and I_1, I_2, and I_3 are the three images acquired with different pattern positions. Microscopes based on this design have been used rather extensively, but with limited flexibility in tuning the optical section thickness, the choice of illumination patterns, or the data processing method.

In the quest for higher imaging rates and increased flexibility, investigators have turned to spatial light modulators (SLMs) for pattern creation. SIM-for-sectioning systems employing widefield detection have used digital micromirror devices (DMDs) [7–9] or transmissive liquid crystal SLMs [10].

To increase both the flexibility and the optical sectioning performance of structured illumination microscopy, we used a reflective ferroelectric liquid crystal-on-silicon (LCOS) microdisplay to create the illumination pattern. Use of the microdisplay allows us to utilize a truly arbitrary pattern for structured illumination, including arrays of lines, dots, or random patterns, and thus to find the most suitable scanning pattern for a given sample. This flexibility allows us to easily compromise between scanning speed (i.e., the number of patterns), the desired signal to noise ratio (SNR), and the optical section thickness. Similar LCOS microdisplays have been used previously in SIM [11, 12], and in programmable array microscopy (PAM) [13, 14].

We present a unique, highly accurate calibration procedure that allows us to determine a one-to-one mapping between pixels of the microdisplay used to create the illumination pattern and pixels of the camera chip. In this way we can recreate a digital illumination mask in the acquired data. Knowledge of the exact position of the illumination pattern in each camera image allowed us to apply higher performance data processing methods for image reconstruction, i.e., scaled subtraction of the out of focus light [1, 15].

In the context of “SIM for optical sectioning” systems with no mask in the detection path, the scaled subtraction approach has previously only been suggested as a possible processing method [15]. We evaluated its 3D imaging performance compared to two other, simpler techniques: one applying a maximum minus minimum projection approach [15], and one applying homodyne detection [3, 15]. Scaled subtraction allowed us to obtain better results (i.e., better suppression of out of focus signals with higher SNR) compared to methods which process the data without any information about which part of the sample was illuminated.

The properties of our LCOS-based SIM system are demonstrated using a thin fluorescent layer sample and with biological samples. For illumination we used line grid patterns with a wide range of spatial frequencies. We also compared the results with a Leica SP5 CLSM and an Andor Revolution spinning disk system.

2. Methods

2.1 Microscope setup

Our setup is shown in Fig. 1(a).

Fig. 1. Structured illumination microscope: a) the microscope setup, b) principle of LCOS microdisplay operation.

We used an IX71 microscope equipped with 100×/1.45 NA and 60×/1.35 NA objectives (both oil immersion; Olympus, Hamburg, Germany). We used two detectors: a conventional CCD camera (Clara) and an EMCCD (Ixon 885, both from Andor, Belfast, Northern Ireland). For illumination, we used a 532 nm solid state laser (1000 mW, Dragon laser, ChangChun, China). The laser was introduced into the microscope using a 0.39 NA multimode optical fiber and a 1 inch, 75 mm focal length achromatic lens for collimation (Thor Labs, Newton, New Jersey). To scramble the coherence of the laser and reduce speckle, we used a laser speckle reducer (Optotune, Dietikon, Switzerland) based on an electroactive polymer. Fluorescence was isolated using an appropriate filter set for Cy3 (Chroma, Bellows Falls, Vermont).

The microscope's illumination tube lens and objective collect the light from pixels in the ON state and image the microdisplay onto the sample, see Fig. 1(a). For the illumination tube lens we have chosen a 150 mm focal length lens (Thor Labs) as it images the microdisplay so that it just fills the field of view of the microscope. Because Olympus objectives are designed to use tube lenses with a focal length of 180 mm, the effective demagnification of the microdisplay into the sample is a factor of (150/180) × MAG, where MAG is the magnification of the objective. Using a 100 × /1.45 NA objective, a single 13.6 × 13.6 μm microdisplay pixel will be imaged into the sample with a nominal size of 163 × 163 nm.

At 532 nm, the Abbe limit for our 1.45 NA objective is λ/2NA = 183.4 nm, larger than the 163 nm microdisplay pixel size we used. Nyquist-limited sampling of the specimen by the SIM pattern would imply an optimal microdisplay pixel size of < λ/4NA = 91.7 nm as imaged in the sample. However, we did not observe obvious patterned artifacts in the reconstructed images, and so judged that a pixel size of 163 nm was adequate. These relationships can, in any case, easily be changed by choosing a different illumination tube lens and objective. A separate matter is the CCD pixel size. With a 100× objective, the camera pixel size was 80 nm in the sample, implying Nyquist-limited imaging at the fluorescence wavelength (~550 nm).
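The arithmetic above can be checked with a short script (a sketch; the function names are ours, and the numbers are the values quoted in the text):

```python
def display_pixel_in_sample(pixel_um, f_tube_mm, f_design_mm, mag):
    """Size (in nm) of one microdisplay pixel imaged into the sample."""
    demag = (f_tube_mm / f_design_mm) * mag      # effective demagnification
    return pixel_um * 1e3 / demag                # convert um to nm

def abbe_limit_nm(wavelength_nm, na):
    """Abbe diffraction limit, lambda / (2 NA)."""
    return wavelength_nm / (2.0 * na)

# 13.6 um display pixel, 150 mm illumination tube lens, 180 mm design tube
# lens, 100x objective -> ~163 nm in the sample; Abbe limit at 532 nm, NA 1.45
print(round(display_pixel_in_sample(13.6, 150, 180, 100), 1))  # 163.2
print(round(abbe_limit_nm(532, 1.45), 1))                      # 183.4
```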

2.2 LCOS microdisplay operation

The ferroelectric LCOS microdisplay (type 3DM, Forth Dimension Displays, Dalgety Bay, Scotland) used in our setup offers several characteristics advantageous for structured illumination microscopy including high fill factor (93%), small pixels (13.6 × 13.6 μm), high contrast (> 1000:1 at f/3.2) and high speed (40 μs on/off switch time, ~3.2 kHz maximum pattern refresh rate).

This device functions as an addressable array of quarter-wave plates with a reflective backing. We used it as a programmable spatial light modulator in a binary imaging mode. Pixels that are in the ON state rotate the polarization of light by ~70 degrees after two passes through the liquid crystal material (manufacturer's specification for operation at room temperature). Pixels in the OFF state reflect the light without changing the state of polarization. If vertically polarized illumination light is reflected onto the display using a polarizing beam splitter (PBS) cube (Thor Labs), then after reflection off the display, only horizontally polarized light (corresponding to the pixels in the ON state) is transmitted through the PBS cube towards the microscope, see Fig. 1(b). This allows us to create any desired binary illumination pattern.

Microdisplays are typically used for video projection, meaning that grayscale (or full color) images must be produced. This is usually accomplished using a bitplane weighting approach [16]. For our purposes, i.e., creating a binary mask, a drive sequence with equally weighted bitplanes is required. We chose the longest available bitplane duration, 300 μs.

The LCOS microdisplay requires that the state of every pixel is reversed after each image, i.e., after each 300 μs bitplane. During such compensation cycles, the light source must be switched off. We accomplished this by directly switching the lasers off using synchronizing signals derived from the 3DM microdisplay controller.

2.3 Illumination patterns

Most strategies in structured illumination microscopy assume that the set of illumination masks required for image reconstruction consists of N equal movements of the same pattern, such that the sum of all of the masks results in homogeneous illumination. Let us define the mark-to-area ratio (MAR) [1] of the pattern as the fraction of pixels which are considered to be illuminated in a unit area of the pattern.

The illumination masks used in our experiments consisted of line grid patterns. Lines were t microdisplay pixels thick (“on” pixels) with a gap of N − t microdisplay pixels (“off” pixels) in between. The line grid was shifted by one pixel between each frame to obtain a new illumination mask. The mark-to-area ratio corresponds to MAR = t/N.
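A minimal sketch of such a mask sequence (our own illustrative code, not the authors' software) makes the bookkeeping concrete: with lines t pixels thick shifted one pixel per frame over N frames, each pixel is illuminated exactly t times, so the summed masks are homogeneous and the fraction of "on" pixels in each mask is MAR = t/N.

```python
def line_grid_masks(width, height, t, N):
    """N binary line-grid masks, lines t px thick, shifted 1 px per frame."""
    return [[[1 if (row - shift) % N < t else 0 for _ in range(width)]
             for row in range(height)]
            for shift in range(N)]

masks = line_grid_masks(8, 8, t=2, N=8)          # MAR = t/N = 1/4
# MAR: fraction of "on" pixels in a single mask
on = sum(v for row in masks[0] for v in row)
assert on / (8 * 8) == 2 / 8
# Homogeneity: summed over all N shifts, every pixel is lit exactly t times
total = [[sum(m[r][c] for m in masks) for c in range(8)] for r in range(8)]
assert all(v == 2 for row in total for v in row)
```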

The illumination masks can also be created from dots in a square or hexagonal grid [15], or using randomized illumination patterns [17, 18]. However, we do not consider these options here.

2.4 Optical sectioning from structured illumination data

Several computational approaches for obtaining optically sectioned images from structured illumination data are reviewed in [1]. Essentially, there are two types of approach to data processing. The first type reconstructs optically sectioned images without any information except for the number of illumination patterns N. The second type requires, in addition, knowledge of the exact position of the illumination mask in the camera image.

Let I_Ci represent the intensity values of the computed sectioned image, let I_n be the intensity values of the camera image captured at a given frame n in the sequence of N illumination patterns, and let (x, y) indicate the pixel position in the camera image. A widefield image can be recovered from SIM data as an average of all images:

I_WF(x, y) = (1/N) Σ_{n=1}^{N} I_n(x, y).    (2)

The following two simpler methods reconstruct optical sections from SIM data as follows:

I_C1(x, y) = max_{n=1,…,N} I_n(x, y) − min_{n=1,…,N} I_n(x, y),    (3)

I_C2(x, y) = | Σ_{n=1}^{N} I_n(x, y) exp(2πin/N) |.    (4)

The approach described by Eq. (3) applies maximum minus minimum projection [15]. Here it is assumed that the “max” term contains mainly contributions from parts of the sample that are in focus and the “min” term mainly contributions from out of focus regions. The method in Eq. (4) is a form of homodyne detection [3], which is a technique based on detecting frequency-modulated signals by interference with a reference signal.
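The three formulas can be sketched pixel-wise in a few lines (illustrative code; plain nested lists stand in for images). For a stack with no modulation, i.e., pure out of focus light, Eqs. (3) and (4) both return zero, which is the essence of the sectioning. Note that the modulus in Eq. (4) makes a constant global phase factor irrelevant, so a 0-based frame index can be used.

```python
from cmath import exp, pi

def widefield(stack):                                        # Eq. (2)
    N, H, W = len(stack), len(stack[0]), len(stack[0][0])
    return [[sum(img[r][c] for img in stack) / N for c in range(W)]
            for r in range(H)]

def max_min(stack):                                          # Eq. (3)
    H, W = len(stack[0]), len(stack[0][0])
    return [[max(img[r][c] for img in stack) -
             min(img[r][c] for img in stack) for c in range(W)]
            for r in range(H)]

def homodyne(stack):                                         # Eq. (4)
    N, H, W = len(stack), len(stack[0]), len(stack[0][0])
    return [[abs(sum(stack[n][r][c] * exp(2j * pi * n / N) for n in range(N)))
             for c in range(W)] for r in range(H)]

# Unmodulated (out of focus) light is retained by Eq. (2) but rejected
# by Eqs. (3) and (4):
flat = [[[5.0, 5.0]] for _ in range(3)]
assert widefield(flat)[0][0] == 5.0
assert max_min(flat)[0][0] == 0.0
assert homodyne(flat)[0][0] < 1e-9
```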

2.5 One-to-one correspondence mapping between microdisplay and camera

The illumination pattern position in the camera image might be determined by analyzing the raw images. In our hands this proved both difficult and inaccurate, particularly with sparse samples. In the following two sections we introduce a procedure that allows us to determine a mathematical model describing the one-to-one mapping between the microdisplay and the camera sensor and thus to create a digital illumination mask in the camera image. Having such a model allows one to use arbitrary illumination patterns, to determine the exact pattern position in the camera image even with sparse samples, and to correct for distortions of the illumination pattern in the acquired data.

The illumination patterns created on the microdisplay are projected to the camera chip as follows. An optical ray originating at a point in the plane of the microdisplay passes through the illumination tube lens and the microscope objective, and illuminates the sample. Fluorescence from the sample is collected by the objective and imaged by the microscope onto a point on the camera sensor. A block diagram is shown in Fig. 2.

Fig. 2. Principle of mapping the illumination pattern from the microdisplay to a camera sensor. The rotation of the microdisplay and barrel distortions in the camera image are greatly exaggerated for clarity. Point x on the microdisplay is detected at point û in the camera. We wish to know the position of any arbitrary microdisplay point in the camera image. To do so, we must determine the projective matrix H, see Eq. (6), and the coefficients of the polynomial modeling the geometric distortions, see Eq. (7).

Let us assume for the moment that there are no distortions in the path of the optical ray, i.e., u = û, and let us express the positions of the points x and u in projective (also called homogeneous) coordinates: x̃ = [x, y, 1]^T, ũ = [u, v, 1]^T. Using projective coordinates allows us to describe affine transformations (e.g., translation, rotation, scaling, reflection) by a single matrix multiplication. It can be shown [19] that any point (or pixel) from the microdisplay plane can be unambiguously mapped to the camera sensor (and vice versa) using a linear projective transformation (also called a homography)

α ũ = H x̃,    (6)

where H is a constant projective matrix, and α is an appropriate scaling factor. The projective matrix H is unique for a given setup and has to be determined, see Section 2.6.
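Applying Eq. (6) to a display pixel is a matrix multiplication followed by division by the scale factor α (the third homogeneous coordinate). A sketch with a made-up matrix, not a calibrated one:

```python
def apply_homography(H, x, y):
    """Map display pixel (x, y) to camera coordinates via alpha*u~ = H*x~."""
    xt = (x, y, 1.0)
    u, v, a = (sum(H[r][c] * xt[c] for c in range(3)) for r in range(3))
    return u / a, v / a                  # divide out the scale factor alpha

# Sanity check: a 90-degree rotation plus a translation of (10, 2)
H = [[0.0, -1.0, 10.0],
     [1.0,  0.0,  2.0],
     [0.0,  0.0,  1.0]]
print(apply_homography(H, 3.0, 4.0))     # (6.0, 5.0)
```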

Unfortunately, the illumination pattern created on the microdisplay is slightly distorted when it is imaged on the camera. Therefore, we correct the mapping in Eq. (6) for two distortion components. First, radial distortion (i.e., barrel or pincushion distortion) bends the optical ray from its ideal position, and second, decentering displaces the principal point û_p from the optical axis. Radial distortion is usually modeled by an even-power polynomial. The corrected image coordinates u = [u, v] are obtained by one-to-one mapping as

u = û + (û − û_p) Σ_{k=1}^{K} ρ_k r^{2k},    (7)

where û = [û, v̂] are the measured uncorrected image coordinates, û_p = [û_p, v̂_p] is the measured position of the principal point, r = ‖û − û_p‖_2 is the radial distance from the principal point, ‖·‖_2 stands for the Euclidean norm, and ρ_k are the coefficients of the polynomial modeling the radial distortions. The location of the principal point û_p and the coefficients ρ_k are unknown and have to be determined, see Section 2.6. Further details about this topic, including more complex lens distortion models, can be found in references [19–21].
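Equation (7) is straightforward to apply once û_p and the ρ_k are known; the sketch below uses illustrative coefficient values, not calibrated ones.

```python
from math import hypot

def undistort(u_hat, u_p, rho):
    """Eq. (7): u = u_hat + (u_hat - u_p) * sum_k rho_k * r^(2k)."""
    du, dv = u_hat[0] - u_p[0], u_hat[1] - u_p[1]
    r = hypot(du, dv)                  # Euclidean distance to principal point
    s = sum(r_k * r ** (2 * (k + 1)) for k, r_k in enumerate(rho))
    return (u_hat[0] + du * s, u_hat[1] + dv * s)

# With all rho_k = 0 the mapping reduces to the identity:
assert undistort((100.0, 50.0), (0.0, 0.0), [0.0, 0.0]) == (100.0, 50.0)
# A positive rho_1 pushes points radially outward from the principal point:
print(undistort((2.0, 0.0), (0.0, 0.0), [0.25]))   # (4.0, 0.0)
```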

2.6 Camera-microdisplay calibration

Camera-microdisplay calibration is a procedure that allows us to determine numerical values of the projective matrix H in Eq. (6), and the distortion coefficients ρ_k and the location of the principal point û_p in Eq. (7). Calibration proceeds by finding corresponding pairs of points between the microdisplay and the camera image. Each such correspondence provides one instance of Eq. (6) and one of Eq. (7). This results in a system of equations (see the Appendix) which can be solved by least squares methods. Typically, hundreds of points are used to compensate for uncertainty in the measurements.

As the microdisplay lets us create an arbitrarily configured “known scene,” we established the calibration using a chessboard pattern with a box size of 8 × 8 microdisplay pixels, see Fig. 3(a).

Fig. 3. Chessboard calibration image with four orientation markers defining the coordinate system: a) microdisplay image with known positions of corners and markers and b) the raw camera image (100×/1.45 NA objective; camera is rotated with respect to the display) of the same area with detected corners and markers. Correspondences are indicated by numbers.

Four orientation markers were placed in the chessboard center to define the coordinate system of the microdisplay. The chessboard illumination pattern was projected to the camera sensor using a thin fluorescent film sample. The corresponding camera image is shown in Fig. 3(b). As reference points, we used the known positions of the corners on the microdisplay. Corners in the camera image were detected automatically with subpixel precision using the corner detector described by Noble [22].

The final mapping between the microdisplay and the camera was estimated from about 3600 corresponding corner points. We found that, when using a 100 × objective, the residual error of mapping any arbitrary point from the microdisplay to the camera image using the lens model corrected for radial distortion and decentering components, cf. Equations (6) and (7), is 0.12 ± 0.08 pixels (~10 ± 6 nm referenced to the pattern position in the sample). When using a simpler lens model without distortion correction, the residual error of mapping an arbitrary point was 0.20 ± 0.13 pixels (~16 ± 10 nm referenced to the pattern position in the sample). This represents a barrel distortion of about 0.09% for the full camera image. The accuracy of mapping points for each lens model was determined by measuring point-to-point distances of the corners detected in the camera image and the corresponding corners mapped from the microdisplay into the camera image.
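The quoted residual errors are the mean ± standard deviation of these point-to-point distances. A minimal sketch of that statistic (the coordinates below are made up for illustration):

```python
from math import hypot

def residual_stats(detected, mapped):
    """Mean and s.d. of point-to-point distances between two point sets."""
    d = [hypot(a[0] - b[0], a[1] - b[1]) for a, b in zip(detected, mapped)]
    mean = sum(d) / len(d)
    sd = (sum((x - mean) ** 2 for x in d) / len(d)) ** 0.5
    return mean, sd

detected = [(10.1, 20.0), (30.0, 40.2), (50.0, 60.0)]  # corners found in image
mapped   = [(10.0, 20.0), (30.0, 40.0), (50.0, 60.0)]  # corners mapped via model
mean, sd = residual_stats(detected, mapped)
print(round(mean, 3), round(sd, 3))
```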

2.7 Data acquisition and processing

We acquired image sequences using Andor IQ software, which was used together with an input/output computer card (PCIM-DDA06/16, Measurement Computing, Massachusetts) to move a Z stage (NanoScan Z100, Prior Scientific, Cambridge, UK).

All data processing was performed offline in Matlab (The Mathworks, Natick, Massachusetts). Image intensities of the raw data were first scaled into the interval [0, 1] based on the camera acquisition bit depth.

To use the scaled subtraction method, cf. Equation (5), we first smoothed the digital mask using a Gaussian filter with a sigma that approximates the measured point spread function (PSF) of the microscope.

We sometimes noticed slight patterned artifacts in the reconstructed images, which we attributed to minor fluctuations in the intensity of the laser. To correct this, we normalized each image of a sequence (used for reconstruction of one optical section) such that the average intensity of all images in the sequence is the same. This procedure was suggested by Cole et al. [23] and gave satisfactory results for both fluorescent planes and biological samples.
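The normalization step can be sketched as follows (illustrative code, not the authors' Matlab routines: each image is rescaled so that all images in the sequence share the same average intensity):

```python
def normalize_sequence(stack):
    """Rescale each image so all share the sequence's mean average intensity."""
    means = [sum(v for row in img for v in row) / (len(img) * len(img[0]))
             for img in stack]
    target = sum(means) / len(means)
    return [[[v * target / m for v in row] for row in img]
            for img, m in zip(stack, means)]

# Two "frames" whose average intensities differ by a factor of two:
stack = [[[1.0, 3.0]], [[2.0, 6.0]]]          # means 2.0 and 4.0, target 3.0
out = normalize_sequence(stack)
avg = [sum(v for row in img for v in row) / 2 for img in out]
assert avg == [3.0, 3.0]
```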

3. Results

3.1 Tunable optical sectioning ability

We first determined the optical sectioning ability of the LCOS microdisplay-based structured illumination microscope. This was done by focusing through a thin fluorescent layer sample while scanning with various illumination patterns. The thin fluorescent layer sample was prepared by spreading 1 μl of 40 nm orange fluorescent beads on a coverslip and allowing them to dry. The sample was then mounted in Mowiol and sealed with clear nail polish. Sectioned images were computed for each set of illumination patterns using the Max-Min approach, cf. Equation (3), homodyne detection, cf. Equation (4), and scaled subtraction, cf. Equation (5). The average intensity of each reconstructed image was determined as a function of the axial position of the sample. The resulting peak-shaped curves, with their maxima in the focal plane (see example data in Fig. 4(a)), were fitted to a bimodal Gaussian function plus a constant offset using non-linear least squares methods and normalized such that the maximum of each curve was set to one [24, 25], see example data in Fig. 4(b).

Fig. 4. Example of SIM data used to determine optical sectioning parameters of the system. The illumination pattern was a line grid with MAR = 1/7 (line spacing 981 nm) and line thickness of one microdisplay pixel (diffraction limited in the sample plane). Data were acquired using a 100×/1.45 NA oil immersion objective, 532 nm laser light source, EMCCD camera, and a Z-increment of 50 nm. Shown are a) processed data and b) data after normalization with the fitted bimodal Gaussian functions.

From these data we computed the full width at half maximum (FWHM), which corresponds to the optical section thickness, and the offset. The offset originates from a combination of cross-talk between line pattern “slits” and the effect of noise [1], and is expressed as a percentage of the maximum intensity response. The sectioning data in Fig. 4 are slightly asymmetric; this is usually attributed to spherical aberrations. For these experiments, we used a 532 nm laser, a 100×/1.45 NA oil immersion objective, and a Z-increment of 50 nm.
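For a Gaussian component with standard deviation σ, the FWHM follows the standard relation FWHM = 2·sqrt(2 ln 2)·σ. The σ value below is our own illustrative choice, picked to reproduce a ~299 nm section; it is not a fitted value from the paper:

```python
from math import sqrt, log, exp

def fwhm_from_sigma(sigma):
    """FWHM of a Gaussian: 2 * sqrt(2 ln 2) * sigma."""
    return 2.0 * sqrt(2.0 * log(2.0)) * sigma

def gaussian(z, sigma):
    return exp(-z * z / (2.0 * sigma * sigma))

sigma = 127.0                        # nm; illustrative, gives FWHM ~ 299 nm
w = fwhm_from_sigma(sigma)
assert abs(gaussian(w / 2.0, sigma) - 0.5) < 1e-12   # half maximum at FWHM/2
print(round(w))   # 299
```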

The tunable optical sectioning ability of the SIM system is shown in Fig. 5, where we have plotted the fitted FWHM vs. MAR and offset vs. MAR and line thickness of the illumination pattern for the three processing methods. The line thickness is indicated by color.

Fig. 5. Tunable optical sectioning of the LCOS-based structured illumination microscope for the three examined data processing methods, cf. Equations (3)–(5). The top row shows measured FWHM vs. MAR and the bottom row measured offset vs. MAR. The illumination pattern was a line grid with MAR ∈ [0.05, 0.9] and line thickness of one to six microdisplay pixels (163 nm – 1.08 μm in the sample plane). Data were acquired using a 100×/1.45 NA oil immersion objective, 532 nm laser, EMCCD camera, and a Z-increment of 50 nm. The horizontal dashed lines indicate the measured values for a CLSM with its pinhole set to 1 AU (FWHM = 966 nm, offset = 0.05) and for a spinning disk system (FWHM = 1.632 μm, offset = 0.11).

We can observe that the sectioning strength improves (lower FWHM) as the MAR of the pattern increases, but at the cost of increased offset (lower signal). A similar trend has also been observed in [2, 13, 24]. However, the scaled subtraction method is effective in removing the offset which is present when using the other two methods. In practice, this allows SIM to achieve optical section thicknesses well below those available in CLSM. The thinnest optical section recorded was 299 nm (MAR = 1/3, line spacing 489 nm, diffraction limited line thickness). The system can therefore approach nearly isotropic resolution in x, y, and z. Our results are consistent with those of Neil et al. [3], who determined that optimal optical sectioning would be achieved with a line pattern with a spacing of λ/NA (~380 nm at λ = 550 nm and NA 1.45). In Fig. 5, we do not show the measured values for patterns with a spacing below λ/NA (i.e., MAR = 1/2 for lines one microdisplay pixel thick, corresponding to a line spacing of 326 nm) because of very low pattern contrast in these cases.

We also evaluated the optical sectioning ability of a CLSM (Leica SP5 with 63 × /1.4 NA oil immersion objective, 561 nm laser, 1 AU pinhole) and of a spinning disk microscope (Andor Revolution, 60 × /1.4 NA oil immersion objective, 561 nm laser) using the same fluorescent layer sample. For the CLSM, the measured FWHM was 966 nm and offset was 0.05. For the spinning disk system the measured FWHM was 1.632 μm and offset was 0.11. These values are plotted as dashed lines in Fig. 5.

3.2 Comparison of different processing methods and scanning patterns

To illustrate the possible tradeoffs between the optical sectioning ability of the three processing methods and the spatial frequency of the illumination patterns, we imaged a relatively thick biological sample (a fluorescent pollen grain about 50 μm thick, type 30-4264, Carolina Biological); see the images in Fig. 6.

Fig. 6. Comparison of scanning patterns and processing methods. Images of a pollen grain were acquired using a 60×/1.35 NA oil objective, 532 nm laser, EMCCD camera, and a Z-increment of 500 nm. The intensity profiles are along the vertical lines in the corresponding XY images.

In order to compare the optical sectioning performance of the different processing methods and SIM illumination patterns, we estimated the signal-to-noise ratio (SNR, i.e., the ratio of the average signal to the standard deviation of the background) and the signal-to-background ratio (SBR, i.e., the ratio of the in-focus foreground to the out-of-focus background) in the reconstructed images.

We calculated SNR and SBR as follows. The signal in the reconstructed image was segmented using an iterative threshold selection method based on a k-means algorithm [19]. The background mask for SNR estimation was determined such that neither the in-focus signal nor the out-of-focus light was included. To create a background mask far away from the sample, the signal mask was first morphologically dilated 15 times using a 3 × 3 structuring element and the result was inverted. The background mask for SBR estimation was established as the complement of the signal and noise masks derived for the SNR calculation, so that it captures only the contribution from out-of-focus light.
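The mask construction just described can be sketched in a few lines of NumPy. The function and variable names are ours; the iterative threshold is the standard k-means-style (Ridler-Calvard) scheme referred to in [19]:

```python
import numpy as np

def iterative_threshold(img, tol=1e-3):
    """Iterative threshold selection: a two-cluster k-means-style scheme."""
    t = img.mean()
    while True:
        t_new = 0.5 * (img[img >= t].mean() + img[img < t].mean())
        if abs(t_new - t) < tol:
            return t_new
        t = t_new

def dilate(mask, iterations=15):
    """Morphological dilation of a binary mask with a 3x3 structuring element."""
    m = mask.copy()
    for _ in range(iterations):
        p = np.pad(m, 1, mode="constant")
        m = (p[:-2, :-2] | p[:-2, 1:-1] | p[:-2, 2:] |
             p[1:-1, :-2] | p[1:-1, 1:-1] | p[1:-1, 2:] |
             p[2:, :-2] | p[2:, 1:-1] | p[2:, 2:])
    return m

def snr_sbr(img):
    """Estimate SNR and SBR of a reconstructed image via the three masks."""
    signal_mask = img >= iterative_threshold(img)
    # far background: 15-fold dilation of the signal mask, then inverted
    noise_mask = ~dilate(signal_mask, 15)
    # out-of-focus region: complement of the signal and noise masks
    oof_mask = ~(signal_mask | noise_mask)
    snr = img[signal_mask].mean() / img[noise_mask].std()
    sbr = img[signal_mask].mean() / img[oof_mask].mean()
    return snr, sbr
```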

Figure 6 shows a comparison of single optical sections in the XY plane, as well as XZ and YZ projections of the reconstructed data. In this experiment, we used a 60 × /1.35 NA oil immersion objective and a line grid illumination pattern with two spatial frequencies. The nominal line thickness was 272 nm at the sample plane with a line spacing of 1.63 μm (MAR = 1/6), or 8.16 μm (MAR = 1/30).

It has been predicted theoretically that sparse (low MAR) patterns improve the sectioning ability of a SIM system when imaging thick samples [1]. The YZ projections in Fig. 6 show that the low-frequency illumination pattern indeed images the deepest parts of the sample better than the high-frequency pattern. This difference is most pronounced when using scaled subtraction for reconstruction. However, the coarser pattern also yields images that contain noticeably more out-of-focus signal than the fine pattern. We also found that the scaled subtraction method, used together with a digital mask acquired via the calibration scheme described in Sections 2.5 and 2.6, has both the highest SNR and the highest SBR of the three tested methods. The reconstructed images also show that illumination patterns with low spatial frequency produce higher SNR but thicker optical sections (i.e., lower SBR), whereas high spatial frequencies yield thinner optical sections but lower SNR.

3.3 Comparing different optically sectioning microscopes

Finally, we compared the LCOS-based SIM system to more established optical sectioning microscopes. We imaged a similar pollen grain under conditions as close as we could achieve with the equipment available. The images in Fig. 7 show a comparison of widefield, CLSM, spinning disk, and microdisplay-based structured illumination microscopes. The illumination pattern for SIM was a line grid with MAR = 1/16 and a line thickness of 272 nm in the sample plane (60×/1.35 NA oil immersion objective). We used scaled subtraction, cf. Eq. (5), for processing the SIM data. The widefield image was computed from the SIM data using Eq. (2).

Fig. 7. Comparison of different optically sectioning microscopes. The first row shows maximum intensity projections of the acquired images, the second row one optical section, and the third row the intensity profile along the indicated yellow lines. The acquisition parameters were as follows: CLSM: Leica SP5, 63×/1.4 NA objective, 561 nm laser, XY pixel size 50 nm, pinhole 1 AU. Spinning disk: Andor Revolution, Olympus 60×/1.4 NA objective, Andor iXon Ultra EMCCD camera, 561 nm laser, XY pixel size 222 nm. SIM: LCOS-based structured illumination, Olympus 60×/1.35 NA objective, Andor Clara CCD camera, 532 nm laser, XY pixel size 107.5 nm. The Z-increment in all cases was 500 nm. The scanning pattern for SIM was a line grid with MAR = 1/16 and line thickness 272 nm in the sample plane. The SIM images were reconstructed using the scaled subtraction method, cf. Eq. (5). The widefield image was computed from the SIM data by averaging the raw images, cf. Eq. (2). For comparison, all images were resampled to a pixel size of 107.5 nm.
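Equations (2) and (5) themselves are not reproduced in this excerpt. As an illustration only, the sketch below assumes that the widefield image is the average of the raw frames and that scaled subtraction takes the form "conjugate minus scaled non-conjugate image", in the spirit of [24]; all names are ours:

```python
import numpy as np

def reconstruct(frames, masks, scale):
    """Sketch of mask-based SIM processing (assumed forms, see lead-in).
    frames: raw camera images, shape (N, H, W); masks: the corresponding
    binary illumination masks mapped into camera coordinates, shape (N, H, W)."""
    frames = np.asarray(frames, float)
    masks = np.asarray(masks, float)
    widefield = frames.mean(axis=0)                    # average of raw frames
    conjugate = (frames * masks).sum(axis=0)           # light at illuminated pixels
    nonconjugate = (frames * (1 - masks)).sum(axis=0)  # light at dark pixels
    sectioned = conjugate - scale * nonconjugate       # scaled subtraction
    return widefield, sectioned
```

With complementary masks that tile the image (MAR = 1/N), each camera pixel is illuminated in exactly one frame, so in-focus light lands in the conjugate image while out-of-focus light is split between both images and cancels for a suitable scale.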

The intensity profiles show that the SIM system outperforms both the CLSM and spinning disk microscopes in terms of rejection of out-of-plane fluorescence. Similar results have been reported before for spinning disk microscopes [26], which do not reject out-of-focus signals as well as SIM or CLSM systems. However, as the SIM system has to acquire several images to reconstruct a single optically sectioned image, image acquisition is slower than in spinning disk microscopes, which integrate all the pinhole positions in a single camera exposure. This limits the usefulness of the present system for live cell imaging, where rapidly moving structures would cause artifacts in the reconstructed images. However, our system should be well suited to high resolution scanning of fixed specimens.

4. Discussion

We used the microdisplay to illuminate the sample by directly imaging the display onto the sample plane, rather than forming a fringe pattern based on laser interference, as is usually done in SIM to achieve resolution beyond the Abbe limit [11, 12, 29]. Because of this, any arbitrary pattern or binary image can be projected onto the sample with very high fidelity. This is useful for applications such as fluorescence recovery after photobleaching (FRAP), where arbitrary shapes can be used for bleaching [13]. However, patterns with very high spatial frequencies (approaching the limit defined by the NA of the objective) suffer from poor contrast according to the contrast transfer function of the microscope [2], resulting in reconstructed images with low SNR. This is not the case in SIM with coherent illumination, as is usually used when enhancing lateral resolution.

Using the microdisplay in the image plane as an arbitrary binary mask also means that we utilize illumination light very inefficiently, but this is not a fundamental problem given a bright enough light source (i.e., a laser with adequate power).

5. Conclusion

Appendix: Camera-microdisplay calibration

For camera-microdisplay calibration we need to determine the numerical values of the projective matrix H in Eq. (6), as well as the distortion coefficients and the location of the principal point u^p in Eq. (7). To do this we adapted a method from machine vision applications [19]. The goal is to find a set of corresponding points between the microdisplay and the camera image. One such correspondence is shown in Fig. 2.

To estimate the projective matrix H, we can rewrite Eq. (6) as
\[
\begin{bmatrix} \alpha u \\ \alpha v \\ \alpha \end{bmatrix}
=
\begin{bmatrix}
h_{11} & h_{12} & h_{13} \\
h_{21} & h_{22} & h_{23} \\
h_{31} & h_{32} & h_{33}
\end{bmatrix}
\begin{bmatrix} x \\ y \\ 1 \end{bmatrix},
\tag{8}
\]
which can be further expanded and rearranged into the form
\[
\begin{bmatrix}
x & y & 1 & 0 & 0 & 0 & -ux & -uy & -u \\
0 & 0 & 0 & x & y & 1 & -vx & -vy & -v
\end{bmatrix}
\begin{bmatrix} h_{11} \\ h_{12} \\ \vdots \\ h_{33} \end{bmatrix}
= 0.
\tag{9}
\]
Here one pair of corresponding points generates two rows of the left-hand matrix, as indicated. To determine a solution, at least four correspondences between the microdisplay and the camera image are required; in practice many more points are used to average out measurement uncertainty. We solved the resulting over-determined system of linear equations using the singular value decomposition (SVD).
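As a sketch, this direct linear transformation (DLT) estimate of H from Eq. (9) can be implemented in a few lines of NumPy; the function names are illustrative, and noiseless correspondences are assumed:

```python
import numpy as np

def estimate_homography(pts_display, pts_camera):
    """Estimate the 3x3 projective matrix H of Eq. (8) from point
    correspondences by solving the homogeneous system of Eq. (9) with
    the SVD (the standard DLT algorithm)."""
    A = []
    for (x, y), (u, v) in zip(pts_display, pts_camera):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    # the solution is the right singular vector of the smallest singular value
    _, _, vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = vt[-1].reshape(3, 3)
    return H / H[2, 2]  # fix the free projective scale

def apply_homography(H, pts):
    """Map 2D points through H in homogeneous coordinates, cf. Eq. (8)."""
    pts = np.asarray(pts, dtype=float)
    homog = np.column_stack([pts, np.ones(len(pts))]) @ H.T
    return homog[:, :2] / homog[:, 2:3]
```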

The computation of the projective matrix H has to be coupled with estimation of the parameters of the lens distortion model in Eq. (7). This is done by minimizing the sum of squared distances between points mapped by Eqs. (6) and (7) with respect to the unknown distortion coefficients and the principal point u^p.
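To illustrate one step of this minimization, assume a purely radial Brown-type model (the actual form of Eq. (7) is not reproduced in this excerpt). For a fixed principal point such a model is linear in the coefficients (k1, k2), so the inner fit reduces to a linear least-squares problem; the outer optimization over the principal point is omitted here:

```python
import numpy as np

def fit_radial_distortion(undistorted, distorted, principal_point):
    """Fit radial distortion coefficients (k1, k2) of an assumed
    Brown-type model,
        u_d = u_p + (u - u_p) * (1 + k1*r**2 + k2*r**4),  r = |u - u_p|,
    for a *fixed* principal point u_p.  The model is linear in (k1, k2),
    so one linear least-squares solve suffices; the full calibration
    additionally optimizes over u_p."""
    c = np.asarray(undistorted, float) - principal_point   # centered points
    r2 = np.sum(c**2, axis=1, keepdims=True)               # squared radii
    # residual = c * (k1*r2 + k2*r2**2), stacked per coordinate
    b = (np.asarray(distorted, float) - principal_point - c).ravel()
    A = np.column_stack([(c * r2).ravel(), (c * r2**2).ravel()])
    k, *_ = np.linalg.lstsq(A, b, rcond=None)
    return k  # (k1, k2)
```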

Appendix: Effect of camera-microdisplay calibration in the case of sparse samples

Acknowledgments

This work was supported by the Grant Agency of the Czech Republic (projects 304/09/1047, P205/12/P392, and P302/12/G157) and by the projects Prvouk/1LF/1 and UNCE 204022 from Charles University.

References and links

1. R. Heintzmann, “Structured illumination methods,” in Handbook of Biological Confocal Microscopy, 3rd ed., J. B. Pawley, ed. (Springer Science + Business Media, 2006), pp. 265–279.
2. T. Wilson, “Optical sectioning in fluorescence microscopy,” J. Microsc. 242(2), 111–116 (2011).
3. M. A. A. Neil, R. Juškaitis, and T. Wilson, “Method of obtaining optical sectioning by using structured light in a conventional microscope,” Opt. Lett. 22(24), 1905–1907 (1997).
4. M. G. L. Gustafsson, “Surpassing the lateral resolution limit by a factor of two using structured illumination microscopy,” J. Microsc. 198(2), 82–87 (2000).
5. R. Heintzmann and C. Cremer, “Laterally modulated excitation microscopy: improvement of resolution by using a diffraction grating,” Proc. SPIE 3568, 185–196 (1999).
6. F. Chasles, B. Dubertret, and A. C. Boccara, “Optimization and characterization of a structured illumination microscope,” Opt. Express 15(24), 16130–16140 (2007).
7. T. Fukano and A. Miyawaki, “Whole-field fluorescence microscope with digital micromirror device: imaging of biological samples,” Appl. Opt. 42(19), 4119–4124 (2003).
8. T. Fukano, A. Sawano, Y. Ohba, M. Matsuda, and A. Miyawaki, “Differential Ras activation between caveolae/raft and non-raft microdomains,” Cell Struct. Funct. 32(1), 9–15 (2007).
9. D. M. Rector, D. M. Ranken, and J. S. George, “High-performance confocal system for microscopic or endoscopic applications,” Methods 30(1), 16–27 (2003).
10. S. Monneret, M. Rauzi, and P. F. Lenne, “Highly flexible whole-field sectioning microscope with liquid-crystal light modulator,” J. Opt. A, Pure Appl. Opt. 8(7), S461–S466 (2006).
11. P. Kner, B. B. Chhun, E. R. Griffis, L. Winoto, and M. G. L. Gustafsson, “Super-resolution video microscopy of live cells by structured illumination,” Nat. Methods 6(5), 339–342 (2009).
12. L. Shao, P. Kner, E. H. Rego, and M. G. L. Gustafsson, “Super-resolution 3D microscopy of live whole cells using structured illumination,” Nat. Methods 8(12), 1044–1046 (2011).
13. G. M. Hagen, W. Caarls, K. A. Lidke, A. H. B. deVries, C. Fritsch, B. G. Barisas, D. J. Arndt-Jovin, and T. M. Jovin, “Fluorescence recovery after photobleaching and photoconversion in multiple arbitrary regions of interest using a programmable array microscope,” Microsc. Res. Tech. 72, 431–440 (2009).
14. G. M. Hagen, W. Caarls, M. Thomas, A. Hill, K. A. Lidke, B. Rieger, C. Fritsch, B. van Geest, T. M. Jovin, and D. J. Arndt-Jovin, “Biological applications of an LCoS-based programmable array microscope,” Proc. SPIE 6441, 64410S (2007).
15. R. Heintzmann and P. A. Benedetti, “High-resolution image reconstruction in fluorescence microscopy with patterned excitation,” Appl. Opt. 45(20), 5037–5045 (2006).
16. D. Armitage, I. Underwood, and S.-T. Wu, Introduction to Microdisplays (John Wiley and Sons, 2006), p. 377.
17. R. Heintzmann, Q. S. Hanley, D. Arndt-Jovin, and T. M. Jovin, “A dual path programmable array microscope (PAM): simultaneous acquisition of conjugate and non-conjugate images,” J. Microsc. 204(2), 119–135 (2001).
18. C. Ventalon and J. Mertz, “Dynamic speckle illumination microscopy with translated versus randomized speckle patterns,” Opt. Express 14(16), 7198–7209 (2006).
19. M. Šonka, V. Hlaváč, and R. Boyle, Image Processing, Analysis, and Machine Vision, 2nd ed. (PWS Publishing, 1998), p. 770.
20. D. C. Brown, “Decentering distortion of lenses,” Photogramm. Eng. 32, 444–462 (1966).
21. J. Weng, P. Cohen, and M. Herniou, “Camera calibration with distortion models and accuracy evaluation,” IEEE Trans. Pattern Anal. Mach. Intell. 14(10), 965–980 (1992).
22. J. A. Noble, “Descriptions of image surfaces,” (University of Oxford, Oxford, 1989).
23. M. J. Cole, J. Siegel, S. E. D. Webb, R. Jones, K. Dowling, M. J. Dayel, D. Parsons-Karavassilis, P. M. W. French, M. J. Lever, L. O. D. Sucharov, M. A. A. Neil, R. Juskaitis, and T. Wilson, “Time-domain whole-field fluorescence lifetime imaging with optical sectioning,” J. Microsc. 203(3), 246–257 (2001).
24. Q. S. Hanley, P. J. Verveer, M. J. Gemkow, D. J. Arndt-Jovin, and T. M. Jovin, “An optical sectioning programmable array microscope implemented with a digital micromirror device,” J. Microsc. 196(3), 317–331 (1999).
25. P. J. Verveer, Q. S. Hanley, P. W. Verbeek, L. J. vanVliet, and T. M. Jovin, “Theory of confocal fluorescence imaging in the programmable array microscope (PAM),” J. Microsc. 189(3), 192–198 (1998).
26. R. Wolleschensky, B. Zimmermann, and M. Kempe, “High-speed confocal fluorescence imaging with a novel line scanning microscope,” J. Biomed. Opt. 11(6), 064011 (2006).
27. P. A. A. DeBeule, A. H. B. deVries, D. J. Arndt-Jovin, and T. M. Jovin, “Generation-3 programmable array microscope (PAM) with digital micro-mirror device (DMD),” Proc. SPIE 7932, 79320G (2011).
28. P. Křížek and G. M. Hagen, “Spatial light modulators in fluorescence microscopy,” in Microscopy: Science, Technology, Applications and Education, A. Méndez-Vilas and J. Díaz, eds. (Formatex, 2010), pp. 1366–1377.
29. L. M. Hirvonen, K. Wicker, O. Mandula, and R. Heintzmann, “Structured illumination microscopy of a living cell,” Eur. Biophys. J. 38(6), 807–812 (2009).
30. T. A. Planchon, L. Gao, D. E. Milkie, M. W. Davidson, J. A. Galbraith, C. G. Galbraith, and E. Betzig, “Rapid three-dimensional isotropic imaging of living cells using Bessel beam plane illumination,” Nat. Methods 8(5), 417–423 (2011).

OCIS Codes
(100.3010) Image processing : Image reconstruction techniques
(180.1790) Microscopy : Confocal microscopy
(180.2520) Microscopy : Fluorescence microscopy
(230.6120) Optical devices : Spatial light modulators
(150.1488) Machine vision : Calibration

ToC Category:
Microscopy

History
Original Manuscript: August 27, 2012
Revised Manuscript: September 26, 2012
Manuscript Accepted: September 28, 2012
Published: October 12, 2012

Citation
Pavel Křížek, Ivan Raška, and Guy M. Hagen, "Flexible structured illumination microscope with a programmable illumination array," Opt. Express 20, 24585-24599 (2012)
http://www.opticsinfobase.org/oe/abstract.cfm?URI=oe-20-22-24585

