Optics Express

  • Editor: Andrew M. Weiner
  • Vol. 22, Iss. 2 — Jan. 27, 2014
  • pp: 1697–1712

Bispectral coding: compressive and high-quality acquisition of fluorescence and reflectance

Jinli Suo, Liheng Bian, Feng Chen, and Qionghai Dai  »View Author Affiliations


Optics Express, Vol. 22, Issue 2, pp. 1697-1712 (2014)
http://dx.doi.org/10.1364/OE.22.001697



Abstract

Fluorescence widely coexists with reflectance in the real world, and an accurate representation of these two components in a scene is vitally important. Despite the rich knowledge of fluorescence mechanisms and behaviors, traditional fluorescence imaging approaches are quite limited in efficiency and quality. To address these two shortcomings, we propose a bispectral coding scheme to capture fluorescence and reflectance: multiplexing code is applied to excitation spectrums to raise the signal-to-noise ratio, and compressive sampling code is applied to emission spectrums for high efficiency. For computational reconstruction from the sparse coded measurements, the redundancy in both components promises recovery from sparse measurements, and the difference between their redundancies promises accurate separation. Mathematically, we cast the reconstruction as a joint optimization, whose solution can be derived by the Augmented Lagrange Method. In our experiment, results on both synthetic data and real data captured by our prototype validate the proposed approach, and we also demonstrate its advantages in two computer vision tasks—photorealistic relighting and segmentation.

© 2014 Optical Society of America

1. Introduction

With a focus on efficient and high-quality capturing of the fluorescent and reflective components in a whole scene, this paper explores the intrinsic redundancy within reflectance and fluorescence and proposes an approach that reconstructs them from mixed compressive measurements. The fluorescence component usually covers a wide spectrum, and the Kasha–Vavilov rule [3] states that the spectral distribution of the fluorescence emitted from the same material under different monochromatic lights remains unchanged up to a scaling factor. In other words, if we represent the excitation–emission values of each point as a matrix, the matrix rows are identical except for some scaling factors. Therefore we formulate the fluorescence as a low-rank matrix, which facilitates reconstruction from compressive samples. On the other hand, the reflectance does not change the spectrum of the input illumination; thus it is quite sparse in the excitation–emission space and has some overlap [4] with the fluorescent component due to the Stokes shift. Based on the above analysis, we cast the reconstruction as a joint optimization of the nuclear norm of the fluorescent component and the ℓ1 norm of the reflective component. Mathematically, we resort to convex optimization for the solution.

Fig. 1 The system and results of our approach on one exemplary scene. (a) Prototype setup. (b) One coded image. (c, d) Reconstructed reflectance and fluorescence, respectively.

In summary, the proposed approach contributes mainly in the following ways:
  • Explore the redundancy in reflectance and fluorescence and propose an efficient acquisition and computational reconstruction approach for both components.
  • Formulate the reconstruction of two components from sparse multiplexed measurements as joint optimization, which is solved as derived in later sections.
  • Build a setup for effective capturing of reflectance and fluorescence in real scenes at high spectral resolution.

2. Related work

2.1. Fluorescence

2.2. Compressive spectrum imaging

The redundancy of visual information is widely known and is exploited to build next-generation spectral imaging systems, which usually combine randomly coded image acquisition with sparse reconstruction. Compressive sensing has been applied to spectral imaging under a multiplexing framework [18, 19], and representative works include the coded aperture snapshot systems in [16, 17] and [20]. In a similar framework, August and Stern [21] use a liquid crystal device and a single-pixel sensor for compressive spectrometry. However, these works are mainly limited to the reflection spectrum and are inapplicable for fluorescence capturing. This paper shows that the fluorescent and reflective components are both highly redundant, but in different forms, and proposes to reconstruct them from sparsely captured measurements.

2.3. Multiplexing capturing

3. Formulation

3.1. Derivation of the optimization for a single pixel

Discretizing the spectrums of incident (excitation) and outgoing (fluorescence and reflectance) light into m and n levels, respectively, for each scene point we can represent its high-spectrum fluorescence as a matrix F̂ ∈ ℝm×n and its reflectance as R̂ ∈ ℝm×n. Suppose we shed illumination combinations and capture the accumulated responses M̂ ∈ ℝm×n, which are composed of three components,
\hat{M} = \hat{R} + \hat{F} + \hat{N},
(1)
with N̂ being the noise, which we assume to be Gaussian white noise N(μ, σ2).

Roughly, fluorescence is of a longer wavelength than that of the excitation illumination, and its spectral distribution is generally independent of the excitation illumination except for a scaling factor. Therefore we can often assume no overlap between excitation and emission spectrums, as shown in Fig. 2(a). However, for many materials the fluorescence spectrum overlaps with that of the excitation illumination, as shown in Fig. 2(b); separating the reflective and fluorescent components at the overlapped wavelengths is worth studying and helps accurate appearance modeling. This paper deals with the general case with overlap between reflectance and fluorescence and treats the form with no overlap as a special case. In such cases, we need to introduce priors on both the fluorescent and reflective components for unmixing, since we cannot discriminate the two components by wavelength thresholding in the overlapping range. (i) We assume that reflectance does not change the spectrum of the incident light, so the reflective component R̂ forms a diagonal matrix, which is naturally sparse, as shown in Fig. 2(b). (ii) The independence between the distribution of emitted fluorescence and the excitation spectrum indicates that F̂ is a low-rank matrix whose rank is 1 if the Kasha–Vavilov rule is strictly followed.
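The two priors above can be checked numerically. Below is a small sketch with hypothetical dimensions (m = 6 excitation and n = 10 emission levels, matching the discretization used later); it is an illustration, not the authors' code:

```python
import numpy as np

# Prior (ii): Kasha-Vavilov -> every row of F is a scaled copy of one
# emission spectrum, so F is (exactly) rank 1 in this idealized case.
m, n = 6, 10
rng = np.random.default_rng(0)

emission = rng.random(n)          # shared emission spectrum shape
scale = rng.random(m)             # per-excitation scaling factors
F = np.outer(scale, emission)     # rows are scale_i * emission

# Prior (i): reflectance only returns the incident wavelength,
# so R is nonzero on the diagonal only (naturally sparse).
R = np.zeros((m, n))
R[np.arange(m), np.arange(m)] = rng.random(m)

print(np.linalg.matrix_rank(F))   # 1: low rank
print(np.count_nonzero(R))        # 6: only m nonzeros out of m*n entries
```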

Fig. 2 The visualization of the excitation–emission matrix. The strength of the matrix entries is illustrated by intensity here. Vertical and horizontal color bars respectively illustrate the excitation wavelength λin and the emission wavelength, which includes the fluorescent component λoutfluo and the reflective component λoutref. (a) No overlap between excitation and emission. (b) Slight overlap between excitation and emission.

As previously mentioned, fluorescence behaviors need a bispectral description, and we can adopt a coding paradigm on both excitation and emission sides. To raise the efficiency and quality of capturing simultaneously, we use multiplexed excitation illumination to raise SNR and random subsampling on the emission side to reduce the number of necessary measurements.

Suppose we shed p coded illuminations and record q narrowbands at the CCD; the illumination code and recording code are respectively Î ∈ ℝp×m and Ô ∈ ℝn×q. Accordingly, the reconstruction can be performed by the following optimization:
(\hat{F}^*, \hat{R}^*, \hat{N}^*) = \arg\min \|\hat{F}\|_* + \alpha\|\hat{R}\|_1
\quad \text{s.t.} \quad \pi_{\hat{\Omega}}(\hat{C}) = \pi_{\hat{\Omega}}\bigl(\hat{I}(\hat{F} + \hat{R})\hat{O} + \hat{N}\bigr), \quad |\hat{N} - \mu| < 3\sigma.
(2)
Here || · ||* is the nuclear norm for rank minimization; || · ||1 is the ℓ1 norm, which has been widely used to force the sparsity of R̂; α is a weighting factor introduced to balance the energy terms describing the two priors; Ĉ(i, j) is the measurement from the ith illumination pattern in Î and the jth recording wavelength in Ô; and πΩ̂: ℝp×q → ℝp×q is a linear operator that subsamples the entries out of all p × q possible measurements by taking the component-wise product with a binary matrix Ω̂. As for N̂, the three-sigma rule is used to impose the noise constraints.
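As a concrete illustration of the acquisition model inside Eq. (2), the sketch below simulates the coded measurement of one pixel. All sizes, code densities, and the noise level are assumptions chosen for the example, not calibrated values from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)
m, n = 6, 10              # excitation / emission levels (assumed)
p, q = 6, 10              # coded illuminations / recorded narrowbands (assumed)

F = np.outer(rng.random(m), rng.random(n))        # toy rank-1 fluorescence
R = np.zeros((m, n))
np.fill_diagonal(R, rng.random(m))                # diagonal reflectance

I_hat = (rng.random((p, m)) < 0.5).astype(float)  # ~50% random multiplexing code
O_hat = np.eye(n, q)                              # narrowband recording code
N_hat = rng.normal(0.0, 0.01, (p, q))             # Gaussian white noise

C_hat = I_hat @ (F + R) @ O_hat + N_hat           # all p x q coded responses
Omega = (rng.random((p, q)) < 0.3).astype(float)  # binary subsampling mask
coded = Omega * C_hat                             # pi_Omega(C_hat): kept entries
```

Only the entries selected by `Omega` would actually be captured; the reconstruction then recovers the full F and R from them.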

3.2. From single pixel to image lattice

The formulations in the above subsection are defined on the appearance of one single pixel; capturing the fluorescence and reflectance of a whole image pixel by pixel would still be quite time consuming. Here we concatenate the data at w different pixels horizontally as follows:
[\hat{C}_1\ \hat{C}_2\ \cdots\ \hat{C}_w] = \hat{I}\bigl([\hat{F}_1\ \hat{F}_2\ \cdots\ \hat{F}_w] + [\hat{R}_1\ \hat{R}_2\ \cdots\ \hat{R}_w]\bigr)\,\mathrm{diag}(\hat{O}, \hat{O}, \ldots, \hat{O}) + [\hat{N}_1\ \hat{N}_2\ \cdots\ \hat{N}_w],
(3)
which is further simplified as
C = I(F + R)O + N.
(4)
Here the sizes of the matrices F and R are m × (n·w), with w being the number of pixels, and those of C and N are p × (q·w); I is identical to Î, and the coding matrix O turns into a diagonal replication of the n × q coding matrix Ô in Eq. (2).

Because the fluorescence properties of different positions are mostly different, the low rankness of the concatenated matrix is destroyed, as illustrated in Fig. 3(a). However, we can introduce scaling factors to normalize the difference and make batch processing feasible, as shown in Fig. 3(b). Let a = [a1, a2, ···, am] and b = [b1, b2, ···, bm] denote two scaling vectors, f and g two emission row vectors, and [a1f; a2f; ···; amf] and [b1g; b2g; ···; bmg] the low-rank fluorescence matrices at two pixels. We normalize them by a factor matrix Â = [a′, a′, ···, a′, b′, b′, ···, b′] and concatenate them to get a low-rank matrix [f g; f g; ···; f g]. Here Â needs to be estimated automatically. In addition, concatenating multiple image pixels will not change the sparsity of the reflective component. Then the optimization for reconstructing the fluorescent and reflective components of a whole image can be rewritten as
(F^*, R^*, N^*) = \arg\min \|F\|_* + \alpha\|R\|_1
\quad \text{s.t.} \quad \pi_{\Omega}(C) = \pi_{\Omega}\bigl(I(F + R)O + N\bigr), \quad |N - \mu| < 3\sigma,
(5)
where ⊙ is the component-wise product, i.e., for any two matrices A and B, (A ⊙ B)ij = AijBij. Note that the subsampling matrix Ω is the horizontal replication of Ω̂ in Eq. (2).
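The concatenation in Eqs. (3) and (4) can be sketched as follows; the sizes (and setting p = m for brevity) are illustrative assumptions, and `np.kron` builds the diagonal replication of Ô:

```python
import numpy as np

rng = np.random.default_rng(2)
m, n, q, w = 6, 10, 10, 3              # assumed levels and pixel count

O_hat = np.eye(n, q)                   # per-pixel recording code
# per-pixel fluorescence (rank 1) and diagonal reflectance, stacked horizontally
F = np.hstack([np.outer(rng.random(m), rng.random(n)) for _ in range(w)])
R = np.hstack([np.diag(rng.random(m)) @ np.eye(m, n) for _ in range(w)])
I = (rng.random((m, m)) < 0.5).astype(float)   # multiplexing code (p = m here)

O = np.kron(np.eye(w), O_hat)          # diagonal replication of O_hat
N = rng.normal(0.0, 0.01, (m, q * w))  # noise on the coded measurements
C = I @ (F + R) @ O + N                # compact form of Eq. (4)
print(C.shape)                         # (6, 30): m x (q*w)
```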

Fig. 3 Extending the reconstruction of a single pixel to image lattice by normalization. Here we use different line colors to differentiate excitation wavelengths.

4. Optimization

The optimization defined in Eq. (5) is apparently nonconvex; we propose an iterative algorithm for numeric optimization. To simplify the optimization, following the idea in [27], we introduce a slack variable ε (∀ij, εij ≥ 0) to convert the inequality |N − μ| < 3σ into the equality constraint (N − μ) ⊙ (N − μ) − 9σ2 + ε ⊙ ε = 0. In addition, F and R are replaced with the auxiliary variables S1 and S2, respectively, to obtain closed-form solutions for F and R. So the objective turns into
\min \|S_1\|_* + \alpha\|S_2\|_1
\quad \text{s.t.} \quad S_1 = F,\; S_2 = R,\; C = IFO + IRO + N + E,\; E(i,j)\big|_{(i,j)\in\Omega} = 0,\; (N - \mu)\odot(N - \mu) - 9\sigma^2 + \varepsilon\odot\varepsilon = 0.
(6)

The problem in Eq. (6) is a typical sparse optimization with equality constraints. There are a number of efficient methods to address it, e.g., the Proximal Gradient method and the Augmented Lagrange Multiplier method; for an extensive review of these methods one may refer to [28, 29]. Here we prefer Alternating Direction Minimization (ADM) due to its efficiency and effectiveness [27]. By ADM, Eq. (6) leads to the following augmented Lagrangian:
\mathrm{Lag} = \|S_1\|_* + \alpha\|S_2\|_1 + \langle Y_1, S_1 - F\rangle + \tfrac{\beta}{2}\|S_1 - F\|_F^2 + \langle Y_0, S_2 - R\rangle + \tfrac{\beta}{2}\|S_2 - R\|_F^2 + \langle Y_2, IFO + IRO + N + E - C\rangle + \tfrac{\beta}{2}\|IFO + IRO + N + E - C\|_F^2 + \langle Y_3, (N-\mu)\odot(N-\mu) - 9\sigma^2 + \varepsilon\odot\varepsilon\rangle + \tfrac{\beta}{2}\|(N-\mu)\odot(N-\mu) - 9\sigma^2 + \varepsilon\odot\varepsilon\|_F^2,
(7)
where ⟨·,·⟩ denotes the inner product. The matrices F, R and N are optimization variables; the matrices Y0∼3 are the Lagrange multipliers; and the other matrices, i.e., I, O, and C, are all known. The above objective is analytically tractable by distributed optimization as used in [28]. To solve Eq. (7), we need to derive the update rules for all the unknowns. In the following derivations, we omit the superscript (k) or (k + 1) on the right-hand side of each rule.

For S1, the Lagrangian equation can be rewritten as
f(S_1) = \|S_1\|_* + \frac{\beta}{2}\left\|S_1 - (F - \beta^{-1}Y_1)\right\|_F^2 + C,
(8)
where C is a constant term irrelevant to S1. According to [30], the update rule of such a nuclear norm optimization can be written as
S_1^{(k+1)} = U\, s_{\beta^{-1}}(S_{\mathrm{temp}})\, V^T,
(9)
where U S_temp V^T is the Singular Value Decomposition of (F − β−1Y1) and
s_{\beta^{-1}}(x) = \begin{cases} x - \beta^{-1}, & x > \beta^{-1} \\ x + \beta^{-1}, & x < -\beta^{-1} \\ 0, & \text{otherwise.} \end{cases}
(10)

Similarly, we rewrite the Lagrangian equation for S2 as
f(S_2) = \alpha\|S_2\|_1 + \frac{\beta}{2}\left\|S_2 - (R - \beta^{-1}Y_0)\right\|_F^2.
(11)
Referring to the solution of the ℓ1 problem in [30], S2 can be updated as
S_2^{(k+1)} = s_{\alpha\beta^{-1}}(R - \beta^{-1}Y_0).
(12)
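A minimal NumPy sketch of the two closed-form updates in Eqs. (9), (10), and (12); the routines are our illustration of singular value thresholding and elementwise shrinkage, not the authors' implementation:

```python
import numpy as np

def shrink(x, tau):
    # soft-thresholding operator s_tau of Eq. (10), applied elementwise
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def update_S1(F, Y1, beta):
    # Eq. (9): threshold the singular values of (F - Y1 / beta)
    U, s, Vt = np.linalg.svd(F - Y1 / beta, full_matrices=False)
    return U @ np.diag(shrink(s, 1.0 / beta)) @ Vt

def update_S2(R, Y0, beta, alpha):
    # Eq. (12): elementwise shrinkage of (R - Y0 / beta)
    return shrink(R - Y0 / beta, alpha / beta)
```

Both updates are proximal operators: singular value thresholding for the nuclear norm of S1 and elementwise shrinkage for the ℓ1 norm of S2.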

For E and ε, we set the corresponding partial derivatives of the Lagrangian equation to zero and thus obtain the updated values. The partial derivative with respect to E is
\frac{\partial f(E)}{\partial E} = \beta\left[E - (C - IFO - IRO - N - \beta^{-1}Y_2)\right],
(13)
and the update rule is
E^{(k+1)} = C - IFO - IRO - N - \beta^{-1}Y_2.
(14)
Similarly, we can get ε’s update rule,
\varepsilon^{(k+1)} = \sqrt{9\sigma^2 - (N-\mu)\odot(N-\mu) - \beta^{-1}Y_3}.
(15)

Algorithm 1: Reconstruct reflective and fluorescent components in a scene.


As for F, R and N, it is difficult to obtain closed-form solutions to the three equations; we use the gradient descent method [29] for updating:
F^{(k+1)} = F^{(k)} - \gamma_1 \frac{\partial f(F)}{\partial F},\quad
R^{(k+1)} = R^{(k)} - \gamma_2 \frac{\partial f(R)}{\partial R},\quad
N^{(k+1)} = N^{(k)} - \gamma_3 \frac{\partial f(N)}{\partial N},
(16)
where γ1∼3 represents the step size parameters. The corresponding partial derivatives are respectively
\frac{\partial f(F)}{\partial F} = \beta\left[F + I^T I F O O^T - (S_1 + \beta^{-1}Y_1) - I^T(C - IRO - N - E - \beta^{-1}Y_2)O^T\right]
\frac{\partial f(R)}{\partial R} = \beta\left[R + I^T I R O O^T - (S_2 + \beta^{-1}Y_0) - I^T(C - IFO - N - E - \beta^{-1}Y_2)O^T\right]
\frac{\partial f(N)}{\partial N} = \beta\left[2(N-\mu)\odot(N-\mu)\odot(N-\mu) - 2(N-\mu)\odot(9\sigma^2 - \varepsilon\odot\varepsilon - \beta^{-1}Y_3) + N - (C - IFO - IRO - E - \beta^{-1}Y_2)\right].

The updates of the Lagrange multipliers Y0∼3 can be derived in closed form as listed in Algorithm 1, and the parameters are kept constant during the optimization: ρ = 1.05, β = 10−2 and βmax = 106.
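To make the gradient steps concrete, here is a sketch of the three partial derivatives above and one descent step of Eq. (16). Shapes follow Eq. (4), all inputs are placeholders, and the functions are our paraphrase of the update rules, not released code:

```python
import numpy as np

def grad_F(F, R, N, S1, E, C, I, O, Y1, Y2, beta):
    # partial derivative of the Lagrangian w.r.t. F
    resid = C - I @ R @ O - N - E - Y2 / beta
    return beta * (F + I.T @ I @ F @ O @ O.T
                   - (S1 + Y1 / beta) - I.T @ resid @ O.T)

def grad_R(F, R, N, S2, E, C, I, O, Y0, Y2, beta):
    # partial derivative of the Lagrangian w.r.t. R (symmetric to grad_F)
    resid = C - I @ F @ O - N - E - Y2 / beta
    return beta * (R + I.T @ I @ R @ O @ O.T
                   - (S2 + Y0 / beta) - I.T @ resid @ O.T)

def grad_N(F, R, N, E, eps, C, I, O, Y2, Y3, mu, sigma, beta):
    # partial derivative of the Lagrangian w.r.t. N (elementwise terms)
    d = N - mu
    return beta * (2 * d**3 - 2 * d * (9 * sigma**2 - eps**2 - Y3 / beta)
                   + N - (C - I @ F @ O - I @ R @ O - E - Y2 / beta))

def descent_step(F, R, N, gammas, grads):
    # one step of Eq. (16)
    g1, g2, g3 = gammas
    dF, dR, dN = grads
    return F - g1 * dF, R - g2 * dR, N - g3 * dN
```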

5. Experiments

In Figs. 1(a) and 4, we respectively show the prototype and its light path. The data are captured in a dark room to avoid interference from ambient light; alternatively, the whole setup can be sealed. The multiplexed excitation illumination is implemented by placing a VariSpec liquid crystal tunable filter in front of a xenon lamp. For compressive sampling, we place an identical filter in front of the lens of a Point Grey FL2G-13S2C-C camera, which is synchronized with the filters automatically. The VariSpec transmission is wavelength dependent, so we normalize the final results by scaling according to the transmittance curve provided by the supplier.

Fig. 4 The light path of the proposed imaging system. The corresponding real setup is shown in Fig. 1(a).

In the following experiments, we evenly discretize the wavelength range from 400 nm to 700 nm into 10 levels. Considering that excitation illuminations are of shorter wavelengths, we only vary the excitation illumination from 400 nm to 580 nm. The excitation–emission matrix is thus of dimension 6 × 10, and traverse sampling needs 45 shots here (all pairs whose emission wavelength is not shorter than the excitation wavelength).
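The shot count can be verified directly; the snippet enumerates the 6 × 10 excitation–emission grid and, under the assumption that level indices are aligned by wavelength, skips pairs whose emission level lies below the excitation level:

```python
# 6 excitation levels (400-580 nm) and 10 emission levels (400-700 nm);
# traverse capture keeps pair (i, j) only when the jth emission band is
# not below the ith excitation band, i.e. j >= i on aligned level indices.
shots = sum(1 for i in range(6) for j in range(10) if j >= i)
print(shots)  # 45, matching the traverse-sampling budget stated above
```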

5.1. Synthetic data

To test the performance of the proposed approach quantitatively, we first capture the ground-truth reflectance and fluorescence of several fluorescent materials using the exhaustive strategy, i.e., traversing all possible narrowband pairs except the combinations whose emission wavelength is shorter than the excitation wavelength. Here we average multiple shots for each setting to exclude the effects of noise.

Later, the noise-free coded measurements are simulated by summing up multiple responses under narrowband excitation illuminations and subsampling the emission spectrums. Then we impose noise according to the rule [24] that the noise level increases linearly with the number of photons, with the noise parameters estimated from the above repetitive acquisition. Because the optimum multiplexing codes proposed in [24, 31] are applicable only if (m + 1)/4 is an integer, we use full-rank random multiplexing codes in this paper without loss of generality. The optimum rate for coded illumination is around 50% according to [24, 31]. For sampling rates, we compare the reconstructed fluorescence at different sampling rates against the true values. The results from three randomly selected points are plotted in Fig. 5.

Fig. 5 The performance on synthetic data, including three materials (horizontal) and three sampling rates (vertical). These results are averaged over five different random codes.

From the results we can see that our algorithm successfully recovers the fluorescence at full wavelength resolution from the subsampled measurements and is applicable to different cases. Comparing the performance at different sampling rates, one can observe a slight degradation in accuracy as the number of measurements decreases, but the high accuracy at a 30% sampling rate is still quite promising.

5.2. Real data

In this experiment we capture the coded measurements of a scene including both reflective and fluorescent components using the prototype and reconstruct the two components computationally. Since the filter can only generate a single 20 nm narrowband at a time, we code the spectrum temporally to generate the multiplexed illuminations. In data capture, we set the sampling rate to 30%; that is, 15 snapshots are taken compared with 45 for traverse capturing. For the scene, we bought some toys coated with fluorescent paint (such as the balls and the car) and also created one object by writing several letters on a background with either MK1800 fluorescent pigments or nonfluorescent paint. We selected fluorescent paints and toys that can be excited by short-wavelength visible light.

Figure 6 gives a comparison between the results by demultiplexing and those by traditional capturing. In the left and middle columns, we respectively display the reflective and fluorescent components at specific excitation–emission spectrum pairs. From the comparison one can see that the proposed approach recovers both components successfully while noise is largely suppressed, especially in the dark regions. We also show a comparison between the simulated image under a specific multiplexed illumination and that by exhaustive capturing in the rightmost column. Their high similarity also validates the accuracy of our reconstruction. In addition, we can see that the proposed algorithm is applicable to regions either with or without fluorescence. To quantitatively measure the benefits of multiplexing, we average over multiple measurements under the above three illuminations as ground truth and compare the mean absolute percentage error of the reconstructions, as labeled in Fig. 6. The consistently smaller reconstruction error clearly validates the multiplexing strategy.
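For reference, the error metric labeled in Fig. 6 can be computed as below; the exact formulation (and the eps guard against division by zero) is our assumption, since the paper does not spell it out:

```python
import numpy as np

def mape(pred, truth, eps=1e-8):
    # mean absolute percentage error, in percent
    pred, truth = np.asarray(pred, float), np.asarray(truth, float)
    return 100.0 * np.mean(np.abs(pred - truth) / (np.abs(truth) + eps))

print(round(mape([1.1, 2.0], [1.0, 2.0]), 2))  # 5.0
```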

Fig. 6 Performance of our algorithm on a real scene and comparison with that of traverse capturing. The left column contains only reflectance, the middle column is the fluorescence excited by a single band illumination, and the right column gives the result under a mixture-spectrum illumination. The mean absolute percentage error (MAPE) of each result is labeled in the top right corner.

Another noticeable phenomenon is that the tiger stuffed toy on the right is not fluorescent itself, but there is still an apparent fluorescent component in the data by both traverse capturing and our approach. This is caused by the physical light transport: the fluorescent emission is similar to diffuse reflection, and the fluorescence emitted from the truck is shed on the nonfluorescent tiger, so the fluorescence is reflected into the camera.

We also compare our reconstructed results with those obtained by traverse capturing quantitatively at three randomly selected points, as plotted in Fig. 7. The small difference between the true values (solid curves) and predictions (dashed curves) validates the effectiveness of our approach. In each subfigure, we can also see a large difference among the curves of different excitation illuminations in the left half, and the consistency increases with the emission wavelength. The reason is that the fluorescent part is low rank and of longer wavelength, so its consistency gains dominance at longer wavelengths.

Fig. 7 Quantitative evaluation on real data. Here three representative points are selected, and we differentiate the excitation illuminations with different colors.

5.3. Advantages in other applications

5.3.1. Relighting

The model in this paper describes the fluorescent and reflective properties of a scene at each incident wavelength and each outgoing wavelength, from which we can easily perform photorealistic relighting according to the rendering model [5, 32]. Besides synthesizing the appearance under various illumination coding patterns as shown in Fig. 7(c), the proposed model can also generate images under different light sources given their spectral distributions. Figures 8(a)–8(c) respectively display the relighting results of a fluorescent scene illuminated by three different light sources, whose spectra are calibrated using the VariSpec liquid crystal tunable filter and a digital camera. The top row of Fig. 8 shows the true appearances (including both fluorescence and reflectance) under the three illuminations, alongside which we display the corresponding reconstructed appearances. Their high degree of similarity validates our rendering results. In the bottom row, we also show the relighting results with only the fluorescence component; they clearly show that the letters C and E are nonfluorescent.
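Under this linear rendering model, relighting a pixel reduces to a matrix product: given the reconstructed excitation–emission matrices and a calibrated source spectrum, the outgoing spectrum follows directly. A toy sketch with random stand-ins for the reconstructed matrices and the spectrum:

```python
import numpy as np

rng = np.random.default_rng(3)
m, n = 6, 10
F_hat = np.outer(rng.random(m), rng.random(n))  # reconstructed fluorescence (toy)
R_hat = np.zeros((m, n))
np.fill_diagonal(R_hat, rng.random(m))          # reconstructed reflectance (toy)

light = rng.random(m)                           # calibrated source spectrum
outgoing = light @ (F_hat + R_hat)              # emitted + reflected spectrum
print(outgoing.shape)                           # (10,): one value per band
```

Dropping the `R_hat` term gives the fluorescence-only renderings shown in the bottom row of Fig. 8.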

Fig. 8 Relighting results under three different types of light sources and comparison with true results. (a) Noon sunlight. (b) Tungsten lamp. (c) Mercury vapor lamp.

5.3.2. Segmentation

Fig. 9 Segmentation assisted by high-spectral-resolution fluorescent components. (a) A scene under daylight. (b) RGB values of five labeled regions. (c) Top three discriminative features between regions 1 and 2. (d) Segmentation of car parts. (e) Top three discriminative features among regions 3, 4, and 5. (f) Segmentation of toy ball and fluorescent paint.

Using two-tuples to denote the excitation–emission wavelength pairs, we visually select the top three discriminative features, (430 nm, 550 nm), (430 nm, 610 nm), and (460 nm, 610 nm), and adopt simple hard thresholding for region labeling. One can see a clear separation between the door and body of the car, as shown in Figs. 9(c) and 9(d). The fluorescence behaviors of the ball and the painted round region are also different, so we can easily separate region 4 from regions 3 and 5. However, regions 3 and 5 are indistinguishable even in the high-dimensional excitation–emission space, as shown in Figs. 9(e) and 9(f); this is mainly because the fluorescence emitted from the background is shed on the left part of the ball, which thus exhibits similar fluorescence behaviors.

Recall that we capture the reflectance and fluorescence spectrum at each pixel, and the fluorescence can be emitted by either a single material or a mixture of materials. However, we cannot perform unmixing on the composite emission, which would need additional information about the behaviors of the constituent fluorescent materials.

6. Conclusions and discussions

6.1. Conclusions

We present an approach for capturing the fluorescent and reflective components in a scene efficiently and with high quality. The proposed approach is promising in many applications making use of accurate fluorescence measurements and is applicable to general cases: reflective, fluorescent, or both. We also validate the approach with a prototype and apply it successfully to several computer vision tasks.

6.2. Limitations and potential extensions

Our algorithm is mainly limited by the Gaussian assumption on the system noise. Because the noise on a CCD is highly complicated (e.g., photon noise, dark noise, and read noise) and can be signal dependent, Gaussian white noise is not accurate enough for higher-quality acquisition. Considering a more complex noise model would benefit both theoretical analysis and system building, so this is one of our future directions.

Furthermore, the efficiency and accuracy of the current implementation are limited by the tunable filter and light source. For the adopted filter, each wavelength transition needs around 50 ms, and the narrowband passband is Gaussian shaped, so the prototype can be improved further with higher-end optics. A light source with a flatter spectrum would also be preferable. So far, the dependence between adjacent pixels has not been exploited; another extension is to introduce spatial constraints to raise the efficiency and quality further. The definition of spatial smoothness constraints on reflectance and fluorescence can be borrowed from that of natural images.

Fluorescence is of great importance in microscopy imaging, and an extension of our approach to microscopy could obtain a high-resolution excitation–fluorescence description of the specimen. First, the weakness of fluorescence emission makes the multiplexing strategy especially important. Second, the filters are applied to the whole light source and CCD without technically demanding modifications, so the approach can be extended to microscopy systems easily. Still, the direct extension is nontrivial due to the uniqueness of microspecimens, e.g., the bleaching due to scattering. Combining microimaging strategies such as confocal techniques may help address such problems but is beyond the scope of this paper.

Acknowledgments

This work was supported by the National Natural Science Foundation of China (Nos. 61171119, 61120106003, and 61327902). The authors thank Prof. Imari Sato and Prof. Yoichi Sato for constructive discussions, and also wish to thank the editor and the anonymous reviewers for their insightful comments on the manuscript.

References and links

1. R. Donaldson, “Spectrophotometry of fluorescent pigments,” Br. J. Appl. Phys. 5(6), 210–214 (1954). [CrossRef]

2. I. Sato and C. Zhang, “Image-based separation of reflective and fluorescent components using illumination variant and invariant color,” IEEE Trans. Pattern Anal. 35(12), 2866–2877 (2013). [CrossRef]

3. A. D. McNaught and A. Wilkinson, Compendium of Chemical Terminology (Blackwell Science, 1997).

4. A. Springsteen, “Introduction to measurement of color of fluorescent materials,” Anal. Chim. Acta 380(2), 183–192 (1999). [CrossRef]

5. G. M. Johnson and M. D. Fairchild, “Full-spectral color calculations in realistic image synthesis,” IEEE Comput. Graphics Appl. 19(4), 47–53 (1999). [CrossRef]

6. M. B. Hullin, J. Hanika, B. Ajdin, H.-P. Seidel, J. Kautz, and H. P. A. Lensch, “Acquisition and analysis of bispectral bidirectional reflectance and reradiation distribution functions,” ACM Trans. Graphics 29(4), 1–7 (2010). [CrossRef]

7. M. Soriano, W. Oblefias, and C. Saloma, “Fluorescence spectrum estimation using multiple color images and minimum negativity constraint,” Opt. Express 10(25), 1458–1464 (2002). [CrossRef] [PubMed]

8. Q. Liu, K. Chen, M. Martin, A. Wintenberg, R. Lenarduzzi, M. Panjehpour, B. F. Overholt, and T. Vo-Dinh, “Development of a synchronous fluorescence imaging system and data analysis methods,” Opt. Express 15(20), 12583–12594 (2007). [CrossRef] [PubMed]

9. T. Vo-Dinh, “Principle of synchronous luminescence (SL) technique for biomedical diagnostics,” Proc. SPIE 3911, 42–49 (2000). [CrossRef]

10. I. Sato, T. Okabe, and Y. Sato, “Bispectral photometric stereo based on fluorescence,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2012), pp. 270–277.

11. S. Han, Y. Matsushita, I. Sato, T. Okabe, and Y. Sato, “Camera spectral sensitivity estimation from a single image under unknown illumination by using fluorescence,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2012), pp. 805–812.

12. C. Chi, H. Yoo, and M. Ben-Ezra, “Multi-spectral imaging by optimized wide band illumination,” Int. J. Comput. Vision 86(2–3), 140–151 (2010). [CrossRef]

13. M. Alterman, Y. Schechner, and A. Weiss, “Multiplexed fluorescence unmixing,” in Proceedings of IEEE International Conference on Computational Photography (IEEE, 2010), pp. 1–8.

14. J. Park, M. Lee, M. D. Grossberg, and S. K. Nayar, “Multispectral imaging using multiplexed illumination,” in Proceedings of IEEE Conference on Computer Vision (IEEE, 2007), pp. 1–8.

15.

S. Boyd, N. Parikh, E. Chu, B. Peleato, and J. Eckstein, “Separating the fluorescence and reflectance components of coral spectra,” Appl. Opt. 40(21), 3614–3621 (2001). [CrossRef]

16.

M. E. Gehm, R. John, D. J. Brady, R. M. Willett, and T. J. Schulz, “Single-shot compressive spectral imaging with a dual disperser architecture,” Opt. Express 15(21), 14013–14027 (2007). [CrossRef] [PubMed]

17.

A. Wagadarikar, R. John, R. Willett, and D. Brady, “Single disperser design for coded aperture snapshot spectral imaging,” Appl. Opt. 47(10), B44–B51 (2008). [CrossRef] [PubMed]

18.

R. Horisaki, X. Xiao, J. Tanida, and B. Javidi, “Feasibility study for compressive multi-dimensional integral imaging,” Opt. Express 21(4), 4263–4279 (2013). [CrossRef] [PubMed]

19.

R. Horisaki and J. Tanida, “Multi-channel data acquisition using multiplexed imaging with spatial encoding,” Opt. Express 18(22), 23041–23053 (2010). [CrossRef] [PubMed]

20.

Y. Wu, I. O. Mirza, G. R. Arce, and D. W. Prather, “Development of a digital-micromirror-device-based multishot snapshot spectral imaging system,” Opt. Lett. 36(14), 2692–2694 (2011). [CrossRef] [PubMed]

21.

Y. August and A. Stern, “Compressive sensing spectrometry based on liquid crystal devices,” Opt. Lett. 38(23), 4996–4999 (2013). [CrossRef] [PubMed]

22.

G. Wetzstein, I. Ihrke, and W. Heidrich, “On plenoptic multiplexing and reconstruction,” Int. J. Comput. Vision 101(2), 384–400 (2013). [CrossRef]

23.

N. Ratner and Y. Y. Schechner, “Illumination multiplexing within fundamental limits,” in Proceedings of IEEE Conference on Computer Vision (IEEE, 2011), pp. 1–8.

24.

Y. Y. Schechner, S. K. Nayar, and P. N. Belhumeur, “Multiplexing for optimal lighting,” IEEE Pattern Anal. Mach. Intell. 29(8), 1339–1354 (2007). [CrossRef]

25.

C. Chen, D. Vaquero, and M. Turk, “Illumination demultiplexing from a single image,” in Proceedings of IEEE Conference on Computer Vision (IEEE, 2011), pp. 17–24.

26.

F. Moreno-Noguer, S. Nayar, and P. Belhumeur, “Optimal illumination for image and video relighting,” in Proceedings of IEE European Conference on Visual Media Production (IEE, 2005), pp. 201–210.

27.

S. Boyd, N. Parikh, E. Chu, B. Peleato, and J. Eckstein, “Distributed optimization and statistical learning via the alternating direction method of multipliers,” Found. Trends Mach. Learning 3(1), 741–755 (2010). [CrossRef]

28.

A. Yang, S. Sastry, A. Ganesh, and Y. Ma, “Fast-minimization algorithms and an application in robust face recognition: A review,” in Proceedings of IEEE Conference on Image Processing (IEEE, 2010), pp. 1849–1852.

29.

Y. Deng, Q. Dai, and Z. Zhang, “An overview of computational sparse models and their applications in artificial intelligence,” Artif. Intell. Evol. Comput. Metaheuristics 427, 345–369 (2012). [CrossRef]

30.

Z. Lin, M. Chen, L. Wu, and Y. Ma, “The augmented lagrange multiplier method for exact recovery of corrupted low-rank matrices,” in Technical Report UILU-ENG-09-2215 (UIUC, 2009).

31.

M. Harwit and N. J. A. Sloane, Hadamard Transform Optics (Academic, 1979).

32.

A. Lam and I. Sato, “Spectral modeling and relighting of reflective-fluorescent scenes,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2013), pp. 1452–1459.

OCIS Codes
(300.6280) Spectroscopy : Spectroscopy, fluorescence and luminescence
(110.1758) Imaging systems : Computational imaging
(110.4234) Imaging systems : Multispectral and hyperspectral imaging

ToC Category:
Imaging Systems

History
Original Manuscript: September 12, 2013
Revised Manuscript: December 19, 2013
Manuscript Accepted: January 5, 2014
Published: January 17, 2014

Virtual Issues
Vol. 9, Iss. 3 Virtual Journal for Biomedical Optics

Citation
Jinli Suo, Liheng Bian, Feng Chen, and Qionghai Dai, "Bispectral coding: compressive and high-quality acquisition of fluorescence and reflectance," Opt. Express 22, 1697-1712 (2014)
http://www.opticsinfobase.org/oe/abstract.cfm?URI=oe-22-2-1697

References

  1. R. Donaldson, “Spectrophotometry of fluorescent pigments,” Br. J. Appl. Phys. 5(6), 210–214 (1954). [CrossRef]
  2. I. Sato, C. Zhang, “Image-based separation of reflective and fluorescent components using illumination variant and invariant color,” IEEE Trans. Pattern Anal. Mach. Intell. 35(12), 2866–2877 (2013). [CrossRef]
  3. A. D. McNaught, A. Wilkinson, Compendium of Chemical Terminology (Blackwell Science, 1997).
  4. A. Springsteen, “Introduction to measurement of color of fluorescent materials,” Anal. Chim. Acta 380(2), 183–192 (1999). [CrossRef]
  5. G. M. Johnson, M. D. Fairchild, “Full-spectral color calculations in realistic image synthesis,” IEEE Comput. Graphics Appl. 19(4), 47–53 (1999). [CrossRef]
  6. M. B. Hullin, J. Hanika, B. Ajdin, H.-P. Seidel, J. Kautz, H. P. A. Lensch, “Acquisition and analysis of bispectral bidirectional reflectance and reradiation distribution functions,” ACM Trans. Graphics 29(4), 1–7 (2010). [CrossRef]
  7. M. Soriano, W. Oblefias, C. Saloma, “Fluorescence spectrum estimation using multiple color images and minimum negativity constraint,” Opt. Express 10(25), 1458–1464 (2002). [CrossRef] [PubMed]
  8. Q. Liu, K. Chen, M. Martin, A. Wintenberg, R. Lenarduzzi, M. Panjehpour, B. F. Overholt, T. Vo-Dinh, “Development of a synchronous fluorescence imaging system and data analysis methods,” Opt. Express 15(20), 12583–12594 (2007). [CrossRef] [PubMed]
  9. T. Vo-Dinh, “Principle of synchronous luminescence (SL) technique for biomedical diagnostics,” Proc. SPIE 3911, 42–49 (2000). [CrossRef]
  10. I. Sato, T. Okabe, Y. Sato, “Bispectral photometric stereo based on fluorescence,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2012), pp. 270–277.
  11. S. Han, Y. Matsushita, I. Sato, T. Okabe, Y. Sato, “Camera spectral sensitivity estimation from a single image under unknown illumination by using fluorescence,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2012), pp. 805–812.
  12. C. Chi, H. Yoo, M. Ben-Ezra, “Multi-spectral imaging by optimized wide band illumination,” Int. J. Comput. Vision 86(2–3), 140–151 (2010). [CrossRef]
  13. M. Alterman, Y. Schechner, A. Weiss, “Multiplexed fluorescence unmixing,” in Proceedings of IEEE International Conference on Computational Photography (IEEE, 2010), pp. 1–8.
  14. J. Park, M. Lee, M. D. Grossberg, S. K. Nayar, “Multispectral imaging using multiplexed illumination,” in Proceedings of IEEE Conference on Computer Vision (IEEE, 2007), pp. 1–8.
  15. E. Fuchs, “Separating the fluorescence and reflectance components of coral spectra,” Appl. Opt. 40(21), 3614–3621 (2001). [CrossRef]
  16. M. E. Gehm, R. John, D. J. Brady, R. M. Willett, T. J. Schulz, “Single-shot compressive spectral imaging with a dual disperser architecture,” Opt. Express 15(21), 14013–14027 (2007). [CrossRef] [PubMed]
  17. A. Wagadarikar, R. John, R. Willett, D. Brady, “Single disperser design for coded aperture snapshot spectral imaging,” Appl. Opt. 47(10), B44–B51 (2008). [CrossRef] [PubMed]
  18. R. Horisaki, X. Xiao, J. Tanida, B. Javidi, “Feasibility study for compressive multi-dimensional integral imaging,” Opt. Express 21(4), 4263–4279 (2013). [CrossRef] [PubMed]
  19. R. Horisaki, J. Tanida, “Multi-channel data acquisition using multiplexed imaging with spatial encoding,” Opt. Express 18(22), 23041–23053 (2010). [CrossRef] [PubMed]
  20. Y. Wu, I. O. Mirza, G. R. Arce, D. W. Prather, “Development of a digital-micromirror-device-based multishot snapshot spectral imaging system,” Opt. Lett. 36(14), 2692–2694 (2011). [CrossRef] [PubMed]
  21. Y. August, A. Stern, “Compressive sensing spectrometry based on liquid crystal devices,” Opt. Lett. 38(23), 4996–4999 (2013). [CrossRef] [PubMed]
  22. G. Wetzstein, I. Ihrke, W. Heidrich, “On plenoptic multiplexing and reconstruction,” Int. J. Comput. Vision 101(2), 384–400 (2013). [CrossRef]
  23. N. Ratner, Y. Y. Schechner, “Illumination multiplexing within fundamental limits,” in Proceedings of IEEE Conference on Computer Vision (IEEE, 2011), pp. 1–8.
  24. Y. Y. Schechner, S. K. Nayar, P. N. Belhumeur, “Multiplexing for optimal lighting,” IEEE Trans. Pattern Anal. Mach. Intell. 29(8), 1339–1354 (2007). [CrossRef]
  25. C. Chen, D. Vaquero, M. Turk, “Illumination demultiplexing from a single image,” in Proceedings of IEEE Conference on Computer Vision (IEEE, 2011), pp. 17–24.
  26. F. Moreno-Noguer, S. Nayar, P. Belhumeur, “Optimal illumination for image and video relighting,” in Proceedings of IEE European Conference on Visual Media Production (IEE, 2005), pp. 201–210.
  27. S. Boyd, N. Parikh, E. Chu, B. Peleato, J. Eckstein, “Distributed optimization and statistical learning via the alternating direction method of multipliers,” Found. Trends Mach. Learning 3(1), 1–122 (2010). [CrossRef]
  28. A. Yang, S. Sastry, A. Ganesh, Y. Ma, “Fast ℓ1-minimization algorithms and an application in robust face recognition: A review,” in Proceedings of IEEE Conference on Image Processing (IEEE, 2010), pp. 1849–1852.
  29. Y. Deng, Q. Dai, Z. Zhang, “An overview of computational sparse models and their applications in artificial intelligence,” Artif. Intell. Evol. Comput. Metaheuristics 427, 345–369 (2012). [CrossRef]
  30. Z. Lin, M. Chen, L. Wu, Y. Ma, “The augmented Lagrange multiplier method for exact recovery of corrupted low-rank matrices,” in Technical Report UILU-ENG-09-2215 (UIUC, 2009).
  31. M. Harwit, N. J. A. Sloane, Hadamard Transform Optics (Academic, 1979).
  32. A. Lam, I. Sato, “Spectral modeling and relighting of reflective-fluorescent scenes,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2013), pp. 1452–1459.
