OSA's Digital Library

Virtual Journal for Biomedical Optics


| EXPLORING THE INTERFACE OF LIGHT AND BIOMEDICINE

  • Editors: Andrew Dunn and Anthony Durkin
  • Vol. 8, Iss. 8 — Sep. 4, 2013

Improving signal detection in emission optical projection tomography via single source multi-exposure image fusion

Abbas Cheddad, Christoffer Nord, Andreas Hörnblad, Renata Prunskaite-Hyyryläinen, Maria Eriksson, Fredrik Georgsson, Seppo J. Vainio, and Ulf Ahlgren


Optics Express, Vol. 21, Issue 14, pp. 16584-16604 (2013)
http://dx.doi.org/10.1364/OE.21.016584



Abstract

We demonstrate a technique to improve structural data obtained from Optical Projection Tomography (OPT) using Image Fusion (IF) and contrast normalization. This enables the visualization of molecular expression patterns in biological specimens with highly variable contrast values. In the approach, termed IF-OPT, different exposures are fused by assigning weighted contrasts to each. When applied to projection images from mouse organs and digital phantoms, our results demonstrate the capability of IF-OPT to reveal high and low signal intensity details in challenging specimens. We further provide measurements to highlight the benefits of the new algorithm in comparison to other similar methods.

© 2013 OSA

1. Introduction

OPT is a technique for highly specific three-dimensional (3D) imaging of gene and protein expression patterns in tissue specimens [1

1. J. Sharpe, U. Ahlgren, P. Perry, B. Hill, A. Ross, J. Hecksher-Sørensen, R. Baldock, and D. Davidson, “Optical projection tomography as a tool for 3D microscopy and gene expression studies,” Science 296(5567), 541–545 (2002). [CrossRef] [PubMed]

]. By virtue of its capacity to generate high resolution images of specimens on a scale that has been difficult to cover by other imaging modalities, the use of the technology is rapidly expanding in diverse research fields such as developmental biology, diabetes research, pathology and many more. However, in the course of 3D rendering of images generated by OPT, structures or objects spanning a wide range of signal intensities commonly represent specific challenges for the process of image generation. Examples of such structures include vascular or epithelial branches. For example, in OPT assessments of the liver, peripheral blood vessels typically display very low signal intensities and may be difficult to distinguish from the surrounding background. In contrast, vessels in other areas of the organ (such as the portal vein) display very high signal intensities. In order to visualize the peripheral vessels, a longer exposure can be acquired. However, this may lead to over-saturation of other areas (which may also occur when attempting to regulate the dynamic range for the reconstruction). Conversely, optimized exposures of high intensity elements may lead to under representation of low signal intensity areas. Hence, in order to reliably arrive at an adequate qualitative 3D rendering encompassing all labeled structures in this type of challenging specimen, scanned projections must in many cases undergo extensive post-acquisition image processing.

Image enhancement is a process applied to digital images that makes otherwise invisible features visible. Generally, the success of image enhancement is subjective, i.e. task dependent, but it can be measured by an objective function, such as how well the signal is detected in a scan (our principal focus in this study). Hence, enhancement techniques are most commonly problem oriented [2

2. R. C. Gonzalez and R. E. Woods, Digital Image Processing (Pearson/Prentice Hall, 2008).

]. Thus, a functional method to enhance OPT data would be one that yields a better signal-to-noise ratio and better preservation of morphology. Image fusion is the process of combining two or more registered images acquired under different conditions, or via different detection methods, in order to produce a more descriptive scene. The resulting output is more suitable for the human visual system (HVS) or machine vision than any of the individual input signals, and aids in downstream image processing and analysis tasks [3

3. Z. Zhang and R. S. Blum, “A categorization of multiscale-decomposition-based image fusion schemes with a performance study for a digital camera application,” Proc. IEEE 87(8), 1315–1326 (1999). [CrossRef]

].

In this report we propose a method to overcome the shortcomings of OPT for imaging specimens containing widely differing contrast elements, using image fusion of multi-exposure projections to produce projections that are inherently unbiased towards exposure variations. This fusion approach not only facilitates the visualization of structures with both strong and weak signal intensities; it may also aid in tuning the dynamic range during image reconstruction.

For exposure manipulation of certain specimens, there is a trade-off between capturing weak signals and revealing more detail. Commonly, longer exposure times allow for the detection of weak signals, while a shorter exposure may reveal more detail in a high intensity area. The concept of image fusion is widely used in photography, remote sensing, military imaging equipment (including near-infrared night vision), etc. [4

4. G. Simone, A. Farina, F. C. Morabito, S. B. Serpico, and L. Bruzzone, “Image fusion techniques for remote sensing applications,” Inf. Fusion 3(1), 3–15 (2002). [CrossRef]

]. In biomedical imaging, there is today a tendency towards multimodality (multi-sensor) fusion, e.g., MRI with CT scan data [5

5. G. K. Matsopoulos, S. Marshall, and J. N. H. Brunt, “Multiresolution morphological fusion of MR and CT images of the human brain,” IEEE Proc.Vis Image Sign 141(3), 137–142 (1994).

], PET with CT [6

6. S. Cheng, J. He, and Z. Lv, “Medical image of PET/CT weighted fusion based on wavelet transform,” in Proceedings of IEEE Conference on Bioinformatics and Biomedical Engineering, (IEEE, 2008), pp. 2523–2525. [CrossRef]

], and MRI with PET [7

7. S. Daneshvar and H. Ghassemian, “MRI and PET image fusion by combining IHS and retina-inspired models,” Inf. Fusion 11(2), 114–123 (2010). [CrossRef]

], which ultimately helps in confirming diagnoses or outlining treatment procedures. In parallel to the work described herein, attempts have been made to implement high dynamic range (HDR) imaging on OPT [8

8. P. Fei, Z. Yu, X. Wang, P. J. Lu, Y. Fu, Z. He, J. Xiong, and Y. Huang, “High dynamic range optical projection tomography (HDR-OPT),” Opt. Express 20(8), 8824–8836 (2012). [CrossRef] [PubMed]

]. However, to the best of our knowledge, no research has been done to date at the level of uni-modal signal enhancement of biomedical scans through fusion of multi-exposure OPT projections of specimens on the cm scale. The possibility to simultaneously visualize areas of both high and low intensity in a single specimen has the potential to open up several research avenues for OPT imaging.

1.1 Related work

Image fusion techniques can be classified into two main groups, in both the spatial and frequency domains: pixel based fusion and feature based fusion (segmented blobs). The terms HDR imaging and image fusion are sometimes used incorrectly to describe the same process. The confusion is resolved if we view the HDR process as one that retrieves a camera response function from a given set of different exposures, such that the resulting intensity values exceed the allowable range of standard image formats, hence the name "high dynamic range". This is the reason why HDR images are usually exported to non-standard image formats and are converted to low dynamic range formats for display. In contrast, image fusion does not require the retrieval of camera response functions and does not involve exposure parameters in the computation of the final fused image. Therefore, it does not violate the image format (i.e. the intensity values remain within the image type's range).

Mertens et al. generated normalized weighted maps comprising contrast, saturation and well-exposedness weights of each of the exposures where the conventional averaging was carried out in a pyramid decomposition fashion [17

17. T. Mertens, J. Kautz, and F. Van Reeth, "Exposure fusion: a simple and practical alternative to high dynamic range photography," Comput. Graph. Forum 28(1), 161–171 (2009). [CrossRef]

]. Pyramid decomposition has also been described by others (e.g [18

18. A. Toet, “Hierarchical image fusion,” Mach. Vis. Appl. 3(1), 1–11 (1990). [CrossRef]

, 19

19. P. J. Burt and R. J. Kolczynski, “Enhanced image capture through fusion,” in Proceedings of IEEE Conference on Computer Vision. (IEEE, 1993), pp. 173–182.

].). In a similar approach, the discrete wavelet transform (DWT) was introduced to image fusion at about the same time [20

20. L. J. Chipman, T. M. Orr, and L. N. Graham, “Wavelets and image fusion,” in Proceedings of IEEE Conference on Image Processing. (IEEE, 1995), pp. 248–251. [CrossRef]

, 21

21. L. Hui, B. S. Manjunath, and S. K. Mitra, “Multi-sensor image fusion using the wavelet transform,” in Proceedings of IEEE Conference on Image Processing, (IEEE, 1994), pp. 51–55. [CrossRef]

]. Lewis et al. and Nikolov et al. were also in favor of using DWT [22

22. J. J. Lewis, R. J. O'Callaghan, S. G. Nikolov, D. R. Bull, and C. N. Canagarajah, “Pixel- and region-based image fusion with complex wavelets,” Inf. Fusion 8(2), 119–130 (2007). [CrossRef]

, 23

23. S. Nikolov, P. Hill, D. Bull, and N. C. Jah, “Wavelets for image fusion,” in Wavelets in Signal and Image Analysis: From Theory to Practice, A. A. Petrosian and F. G. Meyer, eds. (Springer, 2001), pp. 213–244.

]. Nikolov et al. showed that working in the wavelet domain gives a more desirable outcome in terms of fusion quality. Pyramid-based fusion, on the other hand, suffers from blockiness, over-completeness of transform coefficients and noise interference compared to the DWT [11

11. M. Ghantous, S. Ghosh, and M. Bayoumi, “A gradient-based hybrid image fusion scheme using object extraction,” in Proceedings of IEEE Conference on Image Processing, (IEEE, 2008), pp. 1300–1303. [CrossRef]

, 12

12. Y. Yang, D. S. Park, S. Huang, and N. Rao, “Medical image fusion via an effective wavelet-based approach,” EURASIP J. Adv. Signal Process. 2010, 1–14 (2010). [CrossRef]

]. Empirical mode decomposition (EMD) is another method that has been proposed in the context of image fusion [24

24. D. Looney and D. P. Mandic, “Multi-scale image fusion using complex extensions of EMD,” IEEE Trans. Signal Process. 57(4), 1626–1630 (2009). [CrossRef]

, 25

25. D. Mandic, M. Golz, A. Kuh, D. Obradovic, T. Tanaka, G. Souretis, W. Leong, D. Looney, and M. Hulle, “Complex Empirical Mode Decomposition for Multichannel Information Fusion,” in Signal Processing Techniques for Knowledge Extraction and Information Fusion (Springer, 2008), pp. 243–260.

]. An advantageous feature of EMD is its ability to adaptively separate spatial frequencies, but since it is a data-driven method there is no guarantee that the decompositions of multiple sources match in terms of number or frequency [24

24. D. Looney and D. P. Mandic, “Multi-scale image fusion using complex extensions of EMD,” IEEE Trans. Signal Process. 57(4), 1626–1630 (2009). [CrossRef]

].

As in HDR imaging, retrieved camera properties can also be used in the context of image fusion. Camera response functions are normally not disclosed by camera manufacturers [10

10. A. O. Akyuz and E. Reinhard, “Noise reduction in high dynamic range imaging,” J Vis Commun Image R 18(5), 366–376 (2007). [CrossRef]

]. Nevertheless, Debevec and Malik propose recovering the camera response function with which multiple exposures can be fused into a single high dynamic range radiance map [26

26. P. E. Debevec and J. Malik, “Recovering high dynamic range radiance maps from photographs,” in Proceedings of the 24th annual conference on Computer graphics and interactive techniques, (ACM Press/Addison-Wesley Publishing Co, 1997) pp. 369–378. [CrossRef]

]. Recovering the response function is far less of an issue for OPT data, as the scanning environment, i.e. the OPT scanner setup, is well controlled. Hence, rather than delving into the computational complexity of HDR, low dynamic range fusion appears sufficient in the case of OPT. Moreover, the HDR method requires prior knowledge of the exposure duration (a scalar variable) for each of the images.

2. Materials and methods

2.1 Specimen preparation and OPT scanning

Mouse liver tissue (lobus sinister lateralis) was obtained from adult C57BL/6 mice (Taconic) and kidneys were derived from neonatal wild-type and Wnt4 null mutant mice, respectively [27

27. K. Stark, S. Vainio, G. Vassileva, and A. P. McMahon, “Epithelial transformation of metanephric mesenchyme in the developing kidney regulated by Wnt-4,” Nature 372(6507), 679–683 (1994). [CrossRef] [PubMed]

]. Both tissue types were immunohistochemically processed and prepared for OPT scanning essentially as described [28

28. T. Alanentalo, A. Asayesh, H. Morrison, C. E. Lorén, D. Holmberg, J. Sharpe, and U. Ahlgren, “Tomographic molecular imaging and 3D quantification within adult mouse organs,” Nat. Methods 4(1), 31–33 (2007). [CrossRef] [PubMed]

]. Smooth muscle expressing vessels of the liver were labeled with Cy3-conjugated anti-smooth muscle α-actin antibodies (Sigma). The ureteric branches of the developing kidney were labeled with antibodies against Troma-I (developed by P. Brulet and R. Kemler and obtained from the Developmental Studies Hybridoma Bank, developed under the auspices of the NICHD and maintained by The University of Iowa, Department of Biology, Iowa City, IA 52242) and visualized with an Alexa594-conjugated secondary antibody (Molecular Probes). All animal experimentation was conducted in accordance with international, national and institutional laws and guidelines at Umeå University and Oulu University, respectively. The specimens were scanned using the Bioptonics 3001 OPT scanner (Bioptonics) and tomographic projections were reconstructed using the NRecon software (SkyScan) as described previously [29

29. A. Cheddad, C. Svensson, J. Sharpe, F. Georgsson, and U. Ahlgren, “Image processing assisted algorithms for optical projection tomography,” IEEE Trans. Med. Imaging 31(1), 1–15 (2012). [CrossRef] [PubMed]

]. The data in Fig. 9 was acquired using the original setup for OPT scanning [1

1. J. Sharpe, U. Ahlgren, P. Perry, B. Hill, A. Ross, J. Hecksher-Sørensen, R. Baldock, and D. Davidson, “Optical projection tomography as a tool for 3D microscopy and gene expression studies,” Science 296(5567), 541–545 (2002). [CrossRef] [PubMed]

] fitted with an Andor iKon-M DU934N-BV camera (Andor Technology) [30

30. A. U. Eriksson, C. Svensson, A. Hörnblad, A. Cheddad, E. Kostromina, M. Eriksson, N. Norlin, A. Pileggi, J. Sharpe, F. Georgsson, T. Alanentalo, and U. Ahlgren, “Near infrared Optical projection tomography for assessments of β-cell mass distribution in diabetes research,” J. Vis. Exp. (71): e50238 (2013). [PubMed]

]. Iso-surface and volume renderings of the analyzed specimens were performed using the Imaris (v7.2.3, Bitplane, Zurich, Switzerland) or Drishti (v2.2, Australian National University, ANUSF VizLab, Canberra, ACT, Australia) software packages.

In order to perform IF-OPT, three exposures were acquired in the following order:

  • Low (I0): signal in high intensity areas never becomes saturated,
  • Medium (I1): normal single-exposure time setting (see Fig. 1), and
  • High (I2): weakest specific signal clearly visible.

Fig. 1 Depiction of the principle used to set up the three different exposures for IF-OPT (liver specimen). This is for illustration only; the actual scanner displays one plot at a time. Initially, the specimen is rotated to locate the brightest area, where the scanner displays a saturation plot. For the "normal" exposure the plot should be close to the over-saturation level but not exceed it, whereas the high exposure should allow certain parts of the specimen to be over-saturated (the purpose here is to catch all weak signals at the expense of over-saturating other areas), and the low exposure should be at or below the middle range. Note, these superimposed plots form a general guideline and saturation may vary depending on the examined specimen and the level of staining.

Scanner settings applied to all samples in this study, using the Bioptonics 3001 scanner, were: rotation step 0.45°, resolution 1024 × 1024 pixels; pixel sizes were 18 μm for liver lobes and 3.3 μm for the kidney. For Fig. 9 the scanner settings were: rotation step 0.9°, resolution 1024 × 1024 pixels and pixel size 29.87 μm. The listed pixel sizes denote the physical value that a pixel corresponds to in the respective projection views. Acquired data were processed as described [29

29. A. Cheddad, C. Svensson, J. Sharpe, F. Georgsson, and U. Ahlgren, “Image processing assisted algorithms for optical projection tomography,” IEEE Trans. Med. Imaging 31(1), 1–15 (2012). [CrossRef] [PubMed]

] in addition to the IF-OPT method discussed hereafter.

2.2 Proposed method (IF-OPT)

The current study proposes a contrast limited adaptive histogram equalization (CLAHE)-driven image fusion method, since manipulation of first-order statistics, i.e. histogram equalization, can help in contrast enhancement of the scanned signal structure. The net effect is contrast-enhanced images in the sense that the intensity levels span a wider range of the intensity scale and exhibit an almost uniform probability density function (PDF).

It is well known that global histogram equalization can give unfavorable results, because weakly stained objects are statistically indistinguishable globally but tend to be more visible locally [2

2. R. C. Gonzalez and R. E. Woods, Digital Image Processing (Pearson/Prentice Hall, 2008).

]. This fact suggests using an adaptive histogram equalization approach, which caters for local statistics and thus overcomes the limitations of standard histogram equalization. CLAHE is a very powerful algorithm, which encapsulates a local operation based on a user-defined window. It is an improvement on Adaptive Histogram Equalization (AHE) and has previously been applied to medical imaging [32

32. E. D. Pisano, S. Zong, B. M. Hemminger, M. DeLuca, R. E. Johnston, K. Muller, M. P. Braeuning, and S. M. Pizer, “Contrast limited adaptive histogram equalization image processing to improve the detection of simulated spiculations in dense mammograms,” J. Digit. Imaging 11(4), 193–200 (1998). [CrossRef] [PubMed]

34

34. S. M. Pizer, J. B. Zimmerman, and E. V. Staab, “Adaptive grey level assignment in CT scan display,” J. Comput. Assist. Tomogr. 8(2), 300–305 (1984). [PubMed]

]. The algorithm is initiated by dividing the image into non-overlapping blocks; the contrast is then estimated in each block and constrained by limiting the slope of the cumulative distribution function (CDF) in order to avoid amplification of noise in homogeneous areas, hence the name "contrast-limited". Artificially induced boundaries, e.g. blocking effects, are eliminated by combining neighboring tiles using bilinear interpolation, i.e. deblocking (see Fig. 2).

Fig. 2 A generic illustration of the mechanics underlying CLAHE. (Top) The two major intensity transformations involved: T1, a local adaptive histogram equalization with a clip-limit (discussed later), and T2, a bilinear interpolation which removes the blocking effect by interpolating between adjacent blocks' boundaries. (Bottom) The interaction formed by CLAHE with two different tile sizes, as in Eqs. (1)-(2).
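The per-tile equalization described above can be sketched with scikit-image's CLAHE implementation. This is a minimal illustration under our own parameter choices, not the authors' code; note that the library expresses the clip limit as a normalized fraction rather than the per-tile pixel count of Eq. (3).

```python
import numpy as np
from skimage import exposure

def clahe(projection, tile=16, clip=0.01):
    """Apply CLAHE to one OPT projection (2D float image in [0, 1]).

    tile -- side length of the equalization tile in pixels; 16x16
            echoes the tile size suggested later in the text.
    clip -- contrast limit as a normalized fraction (library convention).
    """
    return exposure.equalize_adapthist(projection,
                                       kernel_size=(tile, tile),
                                       clip_limit=clip,
                                       nbins=256)
```

The bilinear blending between adjacent tiles (the deblocking step T2) is handled internally by `equalize_adapthist`.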

Let I0, I1 and I2 be the underexposed, normalized (reference) and overexposed images, respectively. Then a compound CLAHE adaptation is imposed according to:

$$\hat{I} = \sum_{i=0}^{2} \omega_i \, \bar{h}(I_i^C). \qquad (1)$$

where $\bar{h}(I_i^C) = \left[I_{i,\max}^C - I_{i,\min}^C\right]\rho(f) + I_{i,\min}^C$, $\hat{I}$ is the weighted average of the input data after applying the first CLAHE transform, and $I^C$ denotes the complement image of $I$ (note: the Bioptonics 3001 OPT scanner exports the projection data inverted, i.e. with a white background, which necessitates re-inverting the projections to match the display in Fig. 1 prior to applying IF-OPT). $\rho(f)$ represents the CDF (cumulative distribution function), $\rho(f) = P(X \le x) = \int_0^x f(t)\,dt$, where $f$ is the probability density function approximated by the intensity histogram, $X$ is a variate and $x$ is a number (i.e., the observed intensity value).
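Eq. (1) can be sketched as a weighted average of CLAHE-mapped complements. Here `clahe_map` is a placeholder for the h̄ transform, and the weights are supplied externally; both names are ours.

```python
import numpy as np

def fuse_exposures(exposures, weights, clahe_map):
    """Weighted fusion of Eq. (1).

    exposures -- list of three 2D arrays (I0, I1, I2), floats in [0, 1]
    weights   -- the omega_i coefficients (summing to 1)
    clahe_map -- callable implementing h-bar on a complemented image
    """
    fused = np.zeros_like(exposures[0])
    for img, w in zip(exposures, weights):
        complement = 1.0 - img  # scanner exports inverted projections
        fused += w * clahe_map(complement)
    return fused
```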

In Eq. (1), $\omega_i$ are weighting coefficients, and these values are optimized using a standard eigen-decomposition method (eig). Let A, B, C be feature vectors corresponding to the 2D input matrices $\bar{h}(I_i), i \in \{0,1,2\}$; then we have the decomposition $[V, \lambda] = \mathrm{eig}(\mathrm{cov}[A,B,C])$ and the weights are derived from $\omega = V_{index} / \sum V_{index}$, where $index$ is the column of the eigenvectors ($V$) corresponding to the dominant eigenvalue $\lambda_{\max}$ in $\lambda$, and $\mathrm{cov}[\cdot]$ is the 3×3 covariance matrix. The obtained weights are assigned in the following order:

$$\bar{h}(I_0) \mapsto \omega_{\mathrm{Average}}, \quad \bar{h}(I_1) \mapsto \omega_{\mathrm{Max}}, \quad \bar{h}(I_2) \mapsto \omega_{\mathrm{Min}}.$$

That is to say, $\bar{h}(I_0)$ is multiplied by the average value in $\omega$, $\bar{h}(I_1)$ (the reference projection) is multiplied by the maximum value in $\omega$, and $\bar{h}(I_2)$ is multiplied by the minimum value in $\omega$. In the case of the liver in Fig. 6, the $\omega_i, i \in \{0,1,2\}$ were calculated as 0.29, 0.57 and 0.14, respectively.
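The eigen-decomposition weighting can be sketched as follows. The function name and the interpretation of the "average" weight as the middle of the three normalized values (consistent with the 0.29/0.57/0.14 example, which sums to 1) are our own illustrative choices.

```python
import numpy as np

def fusion_weights(i0, i1, i2):
    """Derive fusion weights from the dominant eigenvector of the
    covariance of the three (equalized) exposures, as in the paper's
    eig(cov[A, B, C]) step. A sketch; details are assumptions."""
    # Flatten each 2D exposure into a feature vector (one row per exposure).
    feats = np.stack([i0.ravel(), i1.ravel(), i2.ravel()])
    cov = np.cov(feats)                      # 3x3 covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)   # symmetric eigendecomposition
    v = np.abs(eigvecs[:, np.argmax(eigvals)])  # dominant eigenvector
    w = v / v.sum()                          # normalize so weights sum to 1
    # Assignment order from the text: reference exposure I1 gets the max
    # weight, overexposed I2 the min, underexposed I0 the remaining value.
    w_sorted = np.sort(w)[::-1]
    return {"I1": w_sorted[0], "I0": w_sorted[1], "I2": w_sorted[2]}
```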

The histogram of the fused image, at this stage, approximately matches a histogram specified by a uniform distribution [35

35. K. Zuiderveld, “Contrast Limited Adaptive Histogram Equalization,” in Graphic Gems IV, S. H. Paul, ed. (Academic Press Professional, Inc., 1994), pp. 474–485.

]. Hereafter, the resulting distribution needs to be re-modeled in order to squeeze the histogram and shift it toward the mean intensity, i.e. 127.5 for 8-bit images, forming a bell-like shape [35

35. K. Zuiderveld, “Contrast Limited Adaptive Histogram Equalization,” in Graphic Gems IV, S. H. Paul, ed. (Academic Press Professional, Inc., 1994), pp. 474–485.

]:
$$\bar{\bar{h}}(\hat{I}) = \hat{I}_{\min} + \sqrt{2\delta^2 \ln\!\left(\frac{1}{1-\rho(\hat{f})}\right)}. \qquad (2)$$

where $\delta$ is a contrast regulator parameter: a nonnegative real scalar specifying a distribution parameter, equal to 0.4 in our case [35

35. K. Zuiderveld, “Contrast Limited Adaptive Histogram Equalization,” in Graphic Gems IV, S. H. Paul, ed. (Academic Press Professional, Inc., 1994), pp. 474–485.

]. The effect of these histogram manipulations on the intensity histograms is shown in Fig. 3, and on scan data in Fig. 6.

Fig. 3 A simplified illustration of the effect of the compound histogram equalization and redistribution on three different exposures (16-bit) corresponding to the images shown in Fig. 6. Intensity histograms of the three different exposures are depicted, namely, (a) I0 (low exposure, fine details are not well captured), (b) I1 (best adjusted exposure, a mild improvement in capturing the fine details but at the expense of an equivalent oversaturation) and (c) I2 (high exposure, a great improvement in capturing the details, however more pixels are oversaturated), (d) the histogram after applying Eq. (1) to the fused projection of the above three exposures, and (e) after applying Eq. (2) to the result of Eq. (1). As can be seen from the final result in Fig. 6, the main advantages have been achieved; the process exposes very fine details without being trapped in the oversaturated area.
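A minimal sketch of the Rayleigh-shaped redistribution of Eq. (2). Estimating the empirical CDF from pixel ranks is our implementation assumption (the text does not specify it); δ = 0.4 follows the paper.

```python
import numpy as np

def rayleigh_remap(img, delta=0.4):
    """Re-shape the equalized histogram toward a bell-like form, Eq. (2).

    img -- 2D float image; returns an image with the same pixel ordering
    whose intensities follow the inverse Rayleigh CDF of each pixel's
    empirical CDF value rho(f-hat).
    """
    flat = img.ravel()
    # Empirical per-pixel CDF via ranks; (rank+1)/(N+1) keeps cdf < 1
    # so that log(1/(1-cdf)) stays finite.
    ranks = flat.argsort().argsort()
    cdf = (ranks + 1) / (flat.size + 1)
    out = img.min() + np.sqrt(2.0 * delta**2 * np.log(1.0 / (1.0 - cdf)))
    return out.reshape(img.shape)
```

Because the mapping is strictly increasing in the CDF, the pixel intensity ordering of the input is preserved; only the distribution shape changes.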

In our experiments with raw scan projection images of size [1024×1024] = [2^10×2^10], different tile sizes were utilized, t = [2^j × 2^j], j = 2,…,8. For a given j the image is thus decomposed into [2^(10−j) × 2^(10−j)] non-overlapping pixel tiles.

The optimal size of a tile depends on the input image (i.e. how much of the image area is occupied by the investigated sample, and the signal to noise ratio) and is likely to be application oriented. Based on our extensive experiments on OPT data we suggest that, as long as the sample is large enough to occupy most of the FOV, the optimal tile size for the output of the scanner is [2^4 × 2^4]. Therefore the image was blocked into [64×64] tiles in Eq. (2) (the outer processing layer), and the block size was reduced by half in Eq. (1) (the inner processing layer), resulting in the images being blocked into [32×32] tiles.

If the sample is considerably smaller than the FOV the tile size (t) has to be adjusted according to the physical size of the sample within the projected image. It is observed that choosing higher values for the tiles, e.g., when j = 7, can cause an effect similar to a phenomenon known as “blooming” which occurs when a charge or light at highly saturated sites on the imaging surface spills over and affects values at neighboring sites [26

26. P. E. Debevec and J. Malik, “Recovering high dynamic range radiance maps from photographs,” in Proceedings of the 24th annual conference on Computer graphics and interactive techniques, (ACM Press/Addison-Wesley Publishing Co, 1997) pp. 369–378. [CrossRef]

].

The pixel intensities in a block are then calculated and described by the CDF for that block. To limit the contrast and to avoid saturation and enhancement of background noise, a clip limit Eq. (3) is set for the histogram that constrains the permissible contrast [36

36. C.-J. Du and D.-W. Sun, “Retrospective shading correction of confocal laser canning microscopy beef images for three-dimensional visualization,” Food Bioprocess Tech 2(2), 167–176 (2009). [CrossRef]

],
$$\mathrm{ClipLimit} = \left\lceil \frac{\varphi}{256} \right\rceil + \left[\, 0.01\left(\varphi - \frac{\varphi}{256}\right) \right]. \qquad (3)$$

where $\lceil\cdot\rceil$ denotes rounding up to the nearest integer and $[\cdot]$ denotes rounding to the nearest integer. The value 0.01 is the default; if this value is changed to 1, the operation results in standard AHE. $\varphi$ is the product of the block dimensions, i.e. the number of pixels per tile. For example, if an image I has size [1024×1024] and the number of tiles (t), as discussed above, is [16×16], then $\varphi = (1024/16)\cdot(1024/16) = 64^2 = 4096$ pixels. Finally, the value 256 in the above equation denotes the number of histogram bins. Artificially induced boundaries are eliminated by combining the neighboring tiles using bilinear interpolation.
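Eq. (3) can be transcribed directly; the function and parameter names here are ours.

```python
import math

def clip_limit(phi, alpha=0.01, bins=256):
    """Eq. (3): histogram clip limit for one CLAHE tile.

    phi   -- number of pixels per tile (e.g. 64*64 = 4096)
    alpha -- contrast factor; 0.01 is the default, 1 yields standard AHE
    bins  -- number of histogram bins (256 in the text)
    """
    return math.ceil(phi / bins) + round(alpha * (phi - phi / bins))

# For the worked example above (phi = 4096):
# ceil(4096/256) + round(0.01 * (4096 - 16)) = 16 + 41 = 57
```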

At present, it is not feasible for this approach to run in real-time, due to the requirement for three separate exposures. Therefore, the process is implemented as an off-line projection enhancement. It would be possible to speed up the process, although not implemented here, by operating in the wavelet decomposition domain. Such an approach must take into account that CLAHE is not designed and optimized to equalize highly compressed coefficients. Furthermore, one should also be aware that it may sacrifice data quality, due to the fact that wavelets approximate data using square wave functions (Haar) or polynomial functions (Daubechies) [37

37. L. Choudur, U. Dayal, C. Gupta, and R. Swaminatha, “On wavelet compression and cardinality estimation of enterprise data” (2010), retrieved http://www.hpl.hp.com/techreports/2010/HPL-2010-132.pdf.

].

After applying IF-OPT to the raw scanned data, conventional filtered back-projection (FBP) can be carried out as normal. In our case, the OPT scan's unitary step of rotation was set to 0.45°. Note that in the implementation of the filtered back-projection process, the background is automatically estimated and a threshold is set accordingly. As long as the SNR is high, the actual intensity of the background can be well estimated. It is highly unlikely that noise spots overlap in all 360° projections, since this would result in ring artifacts, which are not present in our processed samples.
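The reconstruction step might be sketched with scikit-image's `iradon`. The sinogram layout and array shapes below are our assumptions; the 0.45° rotation step matches the text.

```python
import numpy as np
from skimage.transform import iradon

def reconstruct_slices(projections):
    """Standard FBP of fused OPT projections, slice by slice.

    projections -- array of shape (n_angles, height, width): the fused
    projections of a full 360-degree scan acquired in 0.45-degree steps.
    Returns a stack of reconstructed axial slices.
    """
    n_angles = projections.shape[0]
    theta = np.arange(n_angles) * 0.45     # projection angles in degrees
    slices = []
    for row in range(projections.shape[1]):
        # One detector row across all angles -> sinogram (detector, angle).
        sinogram = projections[:, row, :].T
        slices.append(iradon(sinogram, theta=theta))  # ramp-filtered FBP
    return np.stack(slices)
```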

3. Results

3.1 Performance of IF-OPT on a digital phantom

Projection images of a 3D digital phantom were synthetically constructed to assess the performance of the algorithm. The digital phantom consisted of a hollow, rotationally symmetric tube structure with an intensity gradient, low at one end and high at the other. By simulating acquisition at different exposures, making sure to include over-saturation and simulated (Poisson) noise, three different sets of projection images of the tube were generated, see Figs. 4(a)-4(c). The low exposure image was chosen so as to ensure that the low-intensity part of the tube was captured with reasonable contrast. The high exposure image was generated in the same way but with a focus on the high-intensity part of the tube. In Figs. 5(d)-5(f), the areas of over-saturation are shown.

Fig. 4 Structural analysis of a digital phantom. (a)-(d), Frames from the digital phantom showing three different exposures (low (a), normal (b) and high (c)) together with the proposed fusion method (d). The "screw-like" topology of the phantom was intentionally introduced to help contrast the efficiency of the proposed technique with the finest tuned exposure in terms of object structure enhancement. Note that the diameter of the tube and the "ring structures" were made constant across the length of the object. (e) and (f) depict 3D renderings of reconstructions of (b) and (d). The proposed fusion reveals low intensity regions (without over-saturating other areas) and aids in shape preservation. For example, in the IF-OPT reconstruction the diameter of the tube is more consistent across its length (black arrows in (e) and (f)). Moreover, the ring structures at the bottom of the phantom are deformed (open arrowheads in (e) and (f)) and the uppermost ring structures are essentially absent after 3D reconstruction (black arrowheads in (e) and (f)). "Oversaturated" areas in (a)-(c) are shown in Fig. 5.

Fig. 5 Depiction of over-saturated areas in the digital phantom shown in Figs. 4(a)-4(c). (a)-(c), Individual projection views corresponding to low (a), normal (b) and high (c) exposure. (d)-(f), Over-saturated areas in the respective projection views.
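The multi-exposure simulation described above can be sketched as follows; the phantom geometry, exposure gains and full-well value are illustrative assumptions rather than the authors' actual parameters.

```python
import numpy as np

def simulate_exposures(size=256, full_well=4095, seed=0):
    """Generate low/normal/high exposure projections of a gradient
    phantom with shot noise and detector saturation (cf. Sec. 3.1)."""
    rng = np.random.default_rng(seed)
    # Tube projection with a linear intensity gradient along its length.
    gradient = np.linspace(0.05, 1.0, size)[:, None]   # low -> high
    profile = np.ones((size, size)) * gradient
    exposures = {}
    for name, gain in [("low", 0.3), ("normal", 1.0), ("high", 5.0)]:
        photons = rng.poisson(profile * gain * 1000)   # Poisson shot noise
        exposures[name] = np.clip(photons, 0, full_well)  # saturation
    return exposures
```

With these gains the high exposure deliberately saturates the bright end of the gradient while the low exposure does not, mimicking the over-saturation included in the phantom projections.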

As can be seen in Fig. 4(d), some of the noise is carried through to the fused image but due to the fact that the noise is uncorrelated between projections it will not survive the tomographic reconstruction process. Thus, the reconstructed volume using the IF-OPT approach (Fig. 4(f)) is virtually noise free and results in better preservation of object detail (compare Figs. 4(e) and 4(f)).

3.2 Revealing vascular branching morphology in the liver

For OPT, a mouse liver lobe is a large specimen relative to the scanner's field of view. The vascular tree of the liver (in our case labeled for smooth muscle α-actin) typically displays a high-to-low signal intensity gradient that makes exposure tuning challenging for OPT imaging. The effect of IF-OPT when applied to this problematic specimen is shown in Fig. 6.

Fig. 6 Comparative assessment of the effect of IF-OPT versus normal single-exposure settings. (a)-(c), Different exposures of a projection at angle 0° of an adult C57BL/6 (Taconic) left lateral liver lobe (lobus sinister lateralis) labeled for smooth muscle α-actin and (d), the result of applying the proposed IF-OPT method. (a) Depicts the first projection of the low (500 ms) single-exposure scan, (b) the first projection of the normal (1500 ms) single exposure, (c) the first projection of the high (2500 ms) single exposure, and (d) the first projection of the IF-OPT fused exposures. A major gain in signal capture is noticeable when using IF-OPT. (e), (f) 3D reconstructions of the normal-exposure (e) and IF-OPT (f) projection data shown in (a)-(d). The clear perceptual advantage of IF-OPT pinpoints the usefulness of the technique in weak-signal enhancement, structure highlighting and intensity regulation across the specimen (which also facilitates thresholding during reconstruction). Arrowhead in (b) indicates the over-saturated area. Scale bar in (a) corresponds to 1.8 mm in (a)-(f).

Three exposures were acquired, with I1 being the best-balanced exposure (hereafter referred to as "normal") obtained by a trained biologist/OPT operator. Obviously, this comes at the cost of over-saturation of certain areas, e.g. in the venous entry area (arrowhead in Fig. 6(b)). This carries an additional challenge when reconstructing tomographic sections: adjustments of the dynamic range can be considerably biased towards these regions, affecting the low signal intensity areas (see Fig. 6(a)). Figure 6(d) is the result of applying the proposed algorithm to the three exposures, and Figs. 6(e)-6(f) illustrate representative 3D renderings of the normal and IF-OPT reconstructions.
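As a rough illustration of fusing exposures by weighted contrast, the sketch below assigns each exposure a per-pixel weight that penalizes under- and over-saturated values and forms the normalized weighted sum. The Gaussian well-exposedness weight is an illustrative stand-in, not the exact weighting of Eq. (1):

```python
import numpy as np

def fuse_exposures(exposures, sigma=0.2):
    """Fuse same-scene exposures (floats in [0, 1]) by well-exposedness weights."""
    stack = np.stack(exposures).astype(float)
    # Pixels near mid-intensity are trusted; near-0 (noise-dominated) and
    # near-1 (saturated) pixels receive small weights.
    weights = np.exp(-((stack - 0.5) ** 2) / (2 * sigma ** 2))
    weights /= weights.sum(axis=0, keepdims=True)
    return (weights * stack).sum(axis=0)

# Toy example: a dim, a balanced and an over-exposed view of the same ramp.
scene = np.linspace(0.0, 1.0, 256)
low = np.clip(scene * 0.4, 0, 1)    # short exposure: dark detail compressed
normal = np.clip(scene * 1.0, 0, 1)
high = np.clip(scene * 2.5, 0, 1)   # long exposure: bright half saturated
fused = fuse_exposures([low, normal, high])
```

With such weights, the saturated half of the long exposure and the noisy shadows of the short exposure are both down-weighted, which is the qualitative behavior that multi-exposure fusion relies on.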

3.3 Exposing ureteric branching patterns in the kidney

Several organs, including the lungs, pancreas, salivary glands and kidneys, display complex epithelial branching patterns. Disturbed epithelial branching, which may be caused by genetic factors or by exposure to certain drugs during development, is associated with a number of disease conditions. A good example is the kidney, which is also widely used as a model system for studying general mechanisms behind epithelial branching and epithelial-mesenchymal interactions in developmental biology. During development of this organ, reciprocal interaction between the nephrogenic mesenchyme and the ureteric bud leads to the formation of the branched ureteric tree, the collecting duct and the ureter of the mature kidney.

Similar to the vascular network of the liver, the kidney imposes challenges on the OPT imaging process, and on 3D imaging in general. To test whether the developed approach can better resolve the branching pattern of the ureteric tree, IF-OPT was tested against normal single-exposure OPT scans of neonatal kidneys from wild-type (wt) and Wnt4-null mutant mice, in which the branched epithelium was labeled with Troma-I antibodies that recognize a cytokeratin antigen. The Wnt4 mutants were chosen in part because they display dramatic differences in signal intensity between the proximal and more distal parts of the branching ureteric tree, which normally makes detailed OPT imaging of the entire epithelium challenging, and in part to compare their ureteric bud branching pattern to the wild type. As can be deduced from the resulting image data, IF-OPT considerably facilitates imaging of the entire epithelial plexus in kidneys from Wnt4-null mice (compare Fig. 7(a) with Fig. 7(e), and Figs. 7(b)-7(d) with 7(f), respectively) and wt littermates (Figs. 7(g) and 7(h)).

Fig. 7 IF-OPT overcomes the trade-off in threshold settings for comparative analysis of the ureteric kidney tree in a neonatal Wnt4-null mouse (a)-(f) and its wild-type littermate (g), (h). (a) and (e), Volume renderings of normal and IF-OPT reconstructions. (b)-(d), (f), Corresponding iso-surface reconstructions using three different thresholds (b)-(d) and an IF-OPT iso-surface reconstruction using a medium threshold (f). (g) shows an IF-OPT iso-surface reconstruction of the normal kidney, whereas (h) shows an iso-surface reconstruction of the same kidney based on a normal exposure. The branching epithelial network was stained with antibodies against Troma-I and visualized by an Alexa594 secondary antibody. Scale bar in (e) corresponds to 100 µm in (a)-(f) and scale bar in (g) corresponds to 200 µm in (g) and (h).

3.4 Human visual system scoring

In order to evaluate the effect of the proposed IF-OPT method on the visual output, and to measure its performance in comparison to other methods, 21 trained biologists (defined as researchers holding at least a PhD and/or MD in medical and/or biological science) were presented with 10 OPT volume renderings (see Fig. 8) of the specimen shown in Fig. 6. The projection images for each data set had been processed with one of the following fusion methods: PCA (principal component analysis), DWT (discrete wavelet transform), SIDWT (shift-invariant discrete wavelet transform), MP (morphology pyramid), Laplacian pyramid, PCNN (pulse-coupled neural network), Average (arithmetic mean), HDRMD (MATLAB direct high dynamic range), HDRMN (MATLAB normalized-intensity high dynamic range) and the proposed IF-OPT. The participants were asked to rank the top five images (from 1 to 5) that display the most detailed view of the vascular tree in a mouse liver lobe stained for smooth muscle α-actin. For all images, the participants were blinded to the image enhancement method used.

Fig. 8 Comparison of anatomical structures in a mouse liver lobe (vessels labeled for smooth muscle α-actin) using different fusion methods on the exposures shown in Fig. 6. An HVS score of all methods ranked the IF-OPT method (h) as the one providing the most detail of the vascular network (for details see Table 1). Scale bar in (a) corresponds to 1.8 mm in (a)-(j). For details of the methods see section 3.4.

In this survey, 90.5% of the participants scored IF-OPT as the best performing method (average rank 1.48), demonstrating that IF-OPT surpasses the other tested methods for image enhancement with this type of specimen. For details on the HVS scores of all methods, see Table 1.

Table 1. Comprehensive data from HVS ranking of images in Fig. 8

Note that for PCNN, the parameters used were those given in the original paper [14], as follows: β1 = β2 = β3 = 0.5; convolution core K = [0.1091 0.1409 0.1091; 0.1409 0 0.1409; 0.1091 0.1409 0.1091]; M(·) = W(·) = Y[n−1] ⊗ K, where ⊗ denotes the convolution operation; level factor σ = 1.0; time constant αT = 0.012; and normalized offset parameter VT = 4000. βk denotes the weighting factor of the kth channel, while σ is the level factor that adjusts the average level of internal activity.
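Written out, the convolution core is a 3 × 3 kernel with 0.1091 at the corner neighbors, 0.1409 at the edge neighbors and 0 at the center; M(·) and W(·) are obtained by convolving the previous output map Y[n−1] with it. A small numpy sketch of that step:

```python
import numpy as np

# PCNN convolution core K from the parameters above: neighbors contribute,
# the center pixel itself does not.
K = np.array([[0.1091, 0.1409, 0.1091],
              [0.1409, 0.0,    0.1409],
              [0.1091, 0.1409, 0.1091]])

def convolve3x3(Y, K):
    """Zero-padded 2D convolution with a 3x3 kernel (K is symmetric here,
    so convolution and correlation coincide)."""
    P = np.pad(Y, 1)
    out = np.zeros_like(Y, dtype=float)
    for i in range(3):
        for j in range(3):
            out += K[i, j] * P[i:i + Y.shape[0], j:j + Y.shape[1]]
    return out

# M(.) = W(.) = Y[n-1] convolved with K: each neuron's linking/feeding input
# is the weighted sum of its neighbors' firing in the previous PCNN iteration.
Y_prev = np.zeros((5, 5))
Y_prev[2, 2] = 1.0
M = convolve3x3(Y_prev, K)
```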

Additionally, in the image fusion toolbox (ver. 1.0, available from http://www.metapix.de/toolbox.htm) developed by Oliver Rockinger, Metapix, the settings for the wavelet- and pyramid-based methods were as follows: the 2D shift-invariant discrete wavelet transform (SIDWT) used the 'Haar' wavelet filters (Daubechies family) and the discrete wavelet transform (DWT) used the 'db2' wavelet filters (Daubechies family). Both transforms ran in symmetric-padding (half-point) mode as their DWT extension mode. Further parameters of these methods, as well as of the morphology pyramid and Laplacian pyramid algorithms, were as follows: a decomposition level of 4, 'maximum value' as the coefficient selection rule for the highpass combination and 'average value' as the coefficient selection rule for the lowpass combination. For HDR fusion, the MATLAB built-in function "makehdr" was used, which efficiently implements the algorithm of Reinhard et al. [38].
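The coefficient selection rules quoted above can be illustrated with a single-level Haar transform written out in plain numpy; the toolbox itself uses db2/Haar filters at four decomposition levels, so this sketch only demonstrates the combination rule (average for the lowpass band, maximum magnitude for the highpass bands):

```python
import numpy as np

def haar2(img):
    """Single-level 2D Haar transform: returns (LL, LH, HL, HH) subbands."""
    a, b = img[0::2, 0::2], img[0::2, 1::2]
    c, d = img[1::2, 0::2], img[1::2, 1::2]
    return ((a + b + c + d) / 4,   # LL: local average
            (a + b - c - d) / 4,   # LH: horizontal-edge detail
            (a - b + c - d) / 4,   # HL: vertical-edge detail
            (a - b - c + d) / 4)   # HH: diagonal detail

def ihaar2(LL, LH, HL, HH):
    """Inverse of haar2."""
    out = np.empty((2 * LL.shape[0], 2 * LL.shape[1]))
    out[0::2, 0::2] = LL + LH + HL + HH
    out[0::2, 1::2] = LL + LH - HL - HH
    out[1::2, 0::2] = LL - LH + HL - HH
    out[1::2, 1::2] = LL - LH - HL + HH
    return out

def wavelet_fuse(img1, img2):
    """Fuse two exposures: average the lowpass band, keep the strongest
    (maximum-magnitude) coefficient in each highpass band."""
    c1, c2 = haar2(img1), haar2(img2)
    LL = (c1[0] + c2[0]) / 2                            # 'average' lowpass rule
    high = [np.where(np.abs(h1) >= np.abs(h2), h1, h2)  # 'maximum' highpass rule
            for h1, h2 in zip(c1[1:], c2[1:])]
    return ihaar2(LL, *high)
```

At each detail position the fused image keeps whichever exposure shows the stronger local edge, while smooth intensity is averaged.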

5. Discussion

This work advocates the utility of IF-OPT for the analysis of structures with highly variable signal intensities. By assessing such structures (in our case epithelial or vascular networks) in different organs, we demonstrate the added value of the IF-OPT imaging approach as applied to highly challenging specimens. Obviously, the possibility to assess this type of specimen depends on the dynamic range of the detector. In this report, the experiments were carried out on the only OPT scanner that has so far been made commercially available. It should be noted, however, that even when using a high-end camera with a comparably superior dynamic range, in our case fitted to the original OPT imaging setup described previously [1, 30], the IF-OPT approach greatly benefits the results (see Fig. 9).

Fig. 9 Effect of IF-OPT using an alternative camera. The images depict volume renderings of an adult C57BL/6 (Taconic) left lateral liver lobe (lobus sinister lateralis) labeled for smooth muscle α-actin. Images were acquired using an Andor iKon-M DU934N-BV 16-bit CCD camera fitted to the OPT scanner setup described by Sharpe et al. [1, 30]. This camera has a considerably higher dynamic range than the camera delivered with the Bioptonics 3001 scanner used in Figs. 6-8 of this report. (a) depicts a single, normal-exposure (8000 ms) image and (b) an IF-OPT image comprising three exposures per projection: low (1500 ms), normal (8000 ms) and high (15000 ms). Scale bar in (b) corresponds to 2 mm in (a) and (b).

In parallel to this work, an HDR-based method, in which nine different exposures were used, was developed for transmission OPT of small specimens in a setup with a low bit-depth (8-bit) camera [8]. In comparison, the IF-OPT approach proposed herein greatly reduces the complexity of data processing and does not require retrieval of the camera's response function. Whereas the IF-OPT approach is capable of addressing highly variable contrast intensities using only three exposures, the higher number of exposures for each projection view in the HDR approach [8] increases the demand for data storage capacity (and thereby the potential need for down-sampling) and increases scanning times. Furthermore, for a fluorescent specimen, an increased number of exposures for each projection view also increases the risk of adverse effects due to photobleaching. It should also be noted that, in comparison with the HDRMN and HDRMD algorithms tested here, IF-OPT delivers improved performance.

The proposed method consists of different techniques applied to the original projection data: image fusion (IF), histogram adjustment (CLAHE) and filtered back projection (FBP). In the case of IF-OPT, IF and CLAHE are applied to the projection data as described in Eq. (1), and the resulting fused images are then reconstructed using FBP. It could be argued that by individually manipulating the projection images prior to reconstruction we violate the basic assumptions of tomographic reconstruction by FBP. However, these assumptions are already violated in the original, and most common, implementation of OPT [1]: by using an implementation of the FBP algorithm, attenuation rather than emission is assumed in the imaged volume. Despite this known theoretical drawback, OPT has proven immensely useful in a large number of biological applications. It is important to note that in biomarker-based imaging, especially when looking at anatomical data, it is most commonly sufficient to detect the presence or absence of the marker; a linear mapping between the perceived intensity of the marker and its actual concentration is not always essential. Thus it is not critical that the back projection yields quantitatively accurate intensity values, and consequently it is not critical to exactly meet the assumptions on which the FBP algorithm is built. In the case of IF-OPT, however, the main reason for not applying the contrast enhancement after tomographic reconstruction is computational complexity. Performing CLAHE in full 3D would increase the complexity by an order of magnitude, from O(n²) to O(n³), and since n is typically on the order of 1024 it is not trivial to execute this type of operation on the reconstructed volume with computers easily accessible to the general lab performing OPT. The implementation of an efficient 3D CLAHE that could be applied to data on the gigabyte scale is an open research question of its own and goes beyond the scope of this paper. The effect of applying CLAHE at different stages of the imaging process is shown in Fig. 10.

Fig. 10 Comparison of alternative uses of the components of the IF-OPT approach. The images depict volume renderings of an adult C57BL/6 (Taconic) left lateral liver lobe (lobus sinister lateralis) labeled for smooth muscle α-actin. The images illustrate: (a) "normal" OPT (same as in Fig. 6(b)), (b) the effect of applying CLAHE to the projection images prior to reconstruction, (c) the effect of applying CLAHE to the 2D tomographic data of the fused projection images and (d) the effect of IF-OPT as described in section 3.2. For blinded HVS ranking of (a)-(d), see Table 2. Scale bar in (a) corresponds to 1.8 mm in (a)-(d).

Note that in generating Fig. 10(c), CLAHE was, due to computational complexity, applied to 2D data that was then stacked to generate the 3D volume; it is thus only an approximation of applying CLAHE to the reconstructed volume. It is however clear from the blinded HVS scoring (see Table 2), in which expert readers were asked to rank how well anatomical structures could be seen in the different volumes, that IF-OPT turned out to be superior to any other order of applying the techniques of fusion, histogram adjustment and tomographic reconstruction.

Table 2. Comprehensive data from HVS ranking of images in Fig. 10.

In a recent study [39], we described a protocol whereby CLAHE was utilized to drastically improve quantitative and morphometric assessments of pancreatic islet and β-cell mass distributions. Whilst both that approach and the method presented here exploit CLAHE, there are, as outlined above, several unique features of the IF-OPT technique that distinguish it from the pure CLAHE method.
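The slice-wise approximation used for Fig. 10(c) follows a simple pattern: enhance each tomographic section independently and re-stack. The sketch below uses a plain global histogram equalization per slice as a stand-in for CLAHE (real CLAHE operates on local tiles with a clip limit), which keeps the cost at O(n²) per slice instead of O(n³) for a true 3D operation:

```python
import numpy as np

def hist_equalize_2d(img, n_bins=256):
    """Global histogram equalization of one 2D slice (stand-in for CLAHE)."""
    hist, bin_edges = np.histogram(img.ravel(), bins=n_bins)
    cdf = hist.cumsum().astype(float)
    cdf /= cdf[-1]                         # normalized cumulative distribution
    # Map each pixel through the CDF of its own slice.
    return np.interp(img.ravel(), bin_edges[:-1], cdf).reshape(img.shape)

def enhance_volume_slicewise(volume):
    """Apply the 2D enhancement to every z-slice and re-stack: an
    O(n^2)-per-slice approximation of a far more expensive full-3D operation."""
    return np.stack([hist_equalize_2d(s) for s in volume])
```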

Since the suggested IF-OPT approach alters first-order statistics, studies concerned with intensity-level based measurements (e.g. protein expression levels) should keep this in mind when dealing with image fusion in general. It should further be emphasized that the IF-OPT approach demands that the different exposure scans are properly aligned, or else a post-acquisition registration must be carried out. Post-registration alignment apart, one possibility to deal with imprecisions in the mechanical setup, and thereby imperfect image registration between exposures, would be to take multiple exposures at each rotational position. This approach, however, does not cope with system imperfections between projections. In our setups, where the different exposures are obtained after a full rotational cycle, we have verified that the typical pixel shift between rotations is in the sub-pixel domain. As mentioned above, an issue related to the mode of image acquisition is photobleaching. As demonstrated by Henkelman and associates, normalizing each image according to the rate of signal decay can help improve the results in OPT imaging [40]. In this respect, acquiring all exposures of a projection within each rotational cycle is likely to reduce the difference in signal intensity between the first and the last projection, all exposures included. With these considerations accounted for, IF-OPT provides a robust platform for further exploration into a range of research fields. In addition to the organ systems assessed in this report, such studies may include assessments of high-low signal intensity structures in a range of tissues from normal and genetically modified animals or from pathological biopsies. Examples of such research areas include, but are not limited to, secondary vascular disorders in experimental diabetes research and neuronal pathways in neurological disorders. The technique may further be explored for fusion or co-registration with other source modalities (e.g., CT, PET, MRI). Altogether, the described method for image fusion of OPT projection data may open up several new research avenues for the OPT imaging technique.
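The shift between exposure scans mentioned above can be estimated with standard phase correlation: the normalized cross-power spectrum of two translated images inverse-transforms to a peak at their relative shift. A minimal integer-pixel version in plain numpy (a production check would refine this to sub-pixel precision, e.g. with upsampled phase correlation such as scikit-image's phase_cross_correlation):

```python
import numpy as np

def phase_correlation_shift(ref, img):
    """Estimate the integer (dy, dx) translation of img relative to ref."""
    F1, F2 = np.fft.fft2(ref), np.fft.fft2(img)
    cross_power = np.conj(F1) * F2
    cross_power /= np.abs(cross_power) + 1e-12     # keep phase only
    corr = np.abs(np.fft.ifft2(cross_power))       # peaks at the shift
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Wrap shifts larger than half the image size to negative values.
    if dy > ref.shape[0] // 2:
        dy -= ref.shape[0]
    if dx > ref.shape[1] // 2:
        dx -= ref.shape[1]
    return int(dy), int(dx)

# Toy check: a synthetic projection shifted by (3, -2) pixels is recovered.
rng = np.random.default_rng(1)
ref = rng.random((64, 64))
shifted = np.roll(ref, (3, -2), axis=(0, 1))
dy, dx = phase_correlation_shift(ref, shifted)     # recovers (3, -2)
```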

Acknowledgments

We are grateful to Drs. J. Gilthorpe and J. Sharpe and members of our laboratory for helpful discussions and comments on the manuscript. This work was supported by the Swedish research council, the European Commission (FP-7, Grant agreement no.: CP-IP 228933-2), the Kempe Foundations and by grants from Umeå University to U.A.

References and links

1. J. Sharpe, U. Ahlgren, P. Perry, B. Hill, A. Ross, J. Hecksher-Sørensen, R. Baldock, and D. Davidson, “Optical projection tomography as a tool for 3D microscopy and gene expression studies,” Science 296(5567), 541–545 (2002). [CrossRef] [PubMed]

2. R. C. Gonzalez and R. E. Woods, Digital Image Processing (Pearson/Prentice Hall, 2008).

3. Z. Zhang and R. S. Blum, “A categorization of multiscale-decomposition-based image fusion schemes with a performance study for a digital camera application,” Proc. IEEE 87(8), 1315–1326 (1999). [CrossRef]

4. G. Simone, A. Farina, F. C. Morabito, S. B. Serpico, and L. Bruzzone, “Image fusion techniques for remote sensing applications,” Inf. Fusion 3(1), 3–15 (2002). [CrossRef]

5. G. K. Matsopoulos, S. Marshall, and J. N. H. Brunt, “Multiresolution morphological fusion of MR and CT images of the human brain,” IEE Proc. Vis. Image Signal Process. 141(3), 137–142 (1994).

6. S. Cheng, J. He, and Z. Lv, “Medical image of PET/CT weighted fusion based on wavelet transform,” in Proceedings of IEEE Conference on Bioinformatics and Biomedical Engineering (IEEE, 2008), pp. 2523–2525. [CrossRef]

7. S. Daneshvar and H. Ghassemian, “MRI and PET image fusion by combining IHS and retina-inspired models,” Inf. Fusion 11(2), 114–123 (2010). [CrossRef]

8. P. Fei, Z. Yu, X. Wang, P. J. Lu, Y. Fu, Z. He, J. Xiong, and Y. Huang, “High dynamic range optical projection tomography (HDR-OPT),” Opt. Express 20(8), 8824–8836 (2012). [CrossRef] [PubMed]

9. A. Vavilin, K. Deb, and K. H. Jo, “Fast HDR image generation technique based on exposure blending,” in Proceedings of Trends in Applied Intelligent Systems: 23rd International Conference on Industrial Engineering and Other Applications of Applied Intelligent Systems, N. García-Pedrajas, ed. (Springer, 2010), pp. 379–388. [CrossRef]

10. A. O. Akyuz and E. Reinhard, “Noise reduction in high dynamic range imaging,” J. Vis. Commun. Image Represent. 18(5), 366–376 (2007). [CrossRef]

11. M. Ghantous, S. Ghosh, and M. Bayoumi, “A gradient-based hybrid image fusion scheme using object extraction,” in Proceedings of IEEE Conference on Image Processing (IEEE, 2008), pp. 1300–1303. [CrossRef]

12. Y. Yang, D. S. Park, S. Huang, and N. Rao, “Medical image fusion via an effective wavelet-based approach,” EURASIP J. Adv. Signal Process. 2010, 1–14 (2010). [CrossRef]

13. A. A. Goshtasby, “Fusion of multi-exposure images,” Image Vis. Comput. 23(6), 611–618 (2005). [CrossRef]

14. Z. B. Wang and Y. Ma, “Medical image fusion using m-PCNN,” Inf. Fusion 9(2), 176–185 (2008). [CrossRef]

15. Q. Xiao-Bo, Y. Jing-Wen, X. Hong-Zhi, and Z. Zi-Qian, “Image fusion algorithm based on spatial frequency-motivated pulse coupled neural networks in nonsubsampled contourlet transform domain,” Acta Automatica Sinica 34, 1508–1514 (2008).

16. N. Mitianoudis and T. Stathaki, “Pixel-based and region-based image fusion schemes using ICA bases,” Inf. Fusion 8(2), 131–142 (2007). [CrossRef]

17. T. Mertens, J. Kautz, and F. Van Reeth, “Exposure fusion: a simple and practical alternative to high dynamic range photography,” Comput. Graph. Forum 28(1), 161–171 (2009). [CrossRef]

18. A. Toet, “Hierarchical image fusion,” Mach. Vis. Appl. 3(1), 1–11 (1990). [CrossRef]

19. P. J. Burt and R. J. Kolczynski, “Enhanced image capture through fusion,” in Proceedings of IEEE Conference on Computer Vision (IEEE, 1993), pp. 173–182.

20. L. J. Chipman, T. M. Orr, and L. N. Graham, “Wavelets and image fusion,” in Proceedings of IEEE Conference on Image Processing (IEEE, 1995), pp. 248–251. [CrossRef]

21. L. Hui, B. S. Manjunath, and S. K. Mitra, “Multi-sensor image fusion using the wavelet transform,” in Proceedings of IEEE Conference on Image Processing (IEEE, 1994), pp. 51–55. [CrossRef]

22. J. J. Lewis, R. J. O'Callaghan, S. G. Nikolov, D. R. Bull, and C. N. Canagarajah, “Pixel- and region-based image fusion with complex wavelets,” Inf. Fusion 8(2), 119–130 (2007). [CrossRef]

23. S. Nikolov, P. Hill, D. Bull, and N. C. Jah, “Wavelets for image fusion,” in Wavelets in Signal and Image Analysis: From Theory to Practice, A. A. Petrosian and F. G. Meyer, eds. (Springer, 2001), pp. 213–244.

24. D. Looney and D. P. Mandic, “Multi-scale image fusion using complex extensions of EMD,” IEEE Trans. Signal Process. 57(4), 1626–1630 (2009). [CrossRef]

25. D. Mandic, M. Golz, A. Kuh, D. Obradovic, T. Tanaka, G. Souretis, W. Leong, D. Looney, and M. Hulle, “Complex empirical mode decomposition for multichannel information fusion,” in Signal Processing Techniques for Knowledge Extraction and Information Fusion (Springer, 2008), pp. 243–260.

26. P. E. Debevec and J. Malik, “Recovering high dynamic range radiance maps from photographs,” in Proceedings of the 24th Annual Conference on Computer Graphics and Interactive Techniques (ACM Press/Addison-Wesley Publishing Co., 1997), pp. 369–378. [CrossRef]

27. K. Stark, S. Vainio, G. Vassileva, and A. P. McMahon, “Epithelial transformation of metanephric mesenchyme in the developing kidney regulated by Wnt-4,” Nature 372(6507), 679–683 (1994). [CrossRef] [PubMed]

28. T. Alanentalo, A. Asayesh, H. Morrison, C. E. Lorén, D. Holmberg, J. Sharpe, and U. Ahlgren, “Tomographic molecular imaging and 3D quantification within adult mouse organs,” Nat. Methods 4(1), 31–33 (2007). [CrossRef] [PubMed]

29. A. Cheddad, C. Svensson, J. Sharpe, F. Georgsson, and U. Ahlgren, “Image processing assisted algorithms for optical projection tomography,” IEEE Trans. Med. Imaging 31(1), 1–15 (2012). [CrossRef] [PubMed]

30. A. U. Eriksson, C. Svensson, A. Hörnblad, A. Cheddad, E. Kostromina, M. Eriksson, N. Norlin, A. Pileggi, J. Sharpe, F. Georgsson, T. Alanentalo, and U. Ahlgren, “Near infrared optical projection tomography for assessments of β-cell mass distribution in diabetes research,” J. Vis. Exp. (71), e50238 (2013). [PubMed]

31. S. G. Nikolov, D. R. Bull, C. N. Canagarajah, M. Halliwell, and P. N. T. Wells, “Image fusion using a 3-D wavelet transform,” in Proceedings of IEEE Conference on Image Processing and Its Applications (IEEE, 1999), pp. 235–239. [CrossRef]

32. E. D. Pisano, S. Zong, B. M. Hemminger, M. DeLuca, R. E. Johnston, K. Muller, M. P. Braeuning, and S. M. Pizer, “Contrast limited adaptive histogram equalization image processing to improve the detection of simulated spiculations in dense mammograms,” J. Digit. Imaging 11(4), 193–200 (1998). [CrossRef] [PubMed]

33. S. M. Pizer, E. P. Amburn, J. D. Austin, R. Cromartie, A. Geselowitz, T. Greer, B. ter Haar Romeny, J. B. Zimmerman, and K. Zuiderveld, “Adaptive histogram equalization and its variations,” Comput. Vision Graph. 39(3), 355–368 (1987). [CrossRef]

34. S. M. Pizer, J. B. Zimmerman, and E. V. Staab, “Adaptive grey level assignment in CT scan display,” J. Comput. Assist. Tomogr. 8(2), 300–305 (1984). [PubMed]

35. K. Zuiderveld, “Contrast limited adaptive histogram equalization,” in Graphics Gems IV, P. S. Heckbert, ed. (Academic Press Professional, Inc., 1994), pp. 474–485.

36. C.-J. Du and D.-W. Sun, “Retrospective shading correction of confocal laser scanning microscopy beef images for three-dimensional visualization,” Food Bioprocess Tech. 2(2), 167–176 (2009). [CrossRef]

37. L. Choudur, U. Dayal, C. Gupta, and R. Swaminathan, “On wavelet compression and cardinality estimation of enterprise data” (2010), retrieved from http://www.hpl.hp.com/techreports/2010/HPL-2010-132.pdf.

38. E. Reinhard, G. Ward, S. Pattanaik, and P. Debevec, High Dynamic Range Imaging: Acquisition, Display, and Image-Based Lighting (The Morgan Kaufmann Series in Computer Graphics) (Morgan Kaufmann Publishers Inc., 2005).

39. A. Hörnblad, A. Cheddad, and U. Ahlgren, “An improved protocol for optical projection tomography imaging reveals lobular heterogeneities in pancreatic islet and β-cell mass distribution,” Islets 3(4), 204–208 (2011). [CrossRef] [PubMed]

40. J. R. Walls, J. G. Sled, J. Sharpe, and R. M. Henkelman, “Correction of artefacts in optical projection tomography,” Phys. Med. Biol. 50(19), 4645–4665 (2005). [CrossRef] [PubMed]

OCIS Codes
(100.0100) Image processing : Image processing
(100.2980) Image processing : Image enhancement
(170.0170) Medical optics and biotechnology : Medical optics and biotechnology
(170.3880) Medical optics and biotechnology : Medical and biological imaging

ToC Category:
Medical Optics and Biotechnology

History
Original Manuscript: May 10, 2013
Manuscript Accepted: May 16, 2013
Published: July 2, 2013

Virtual Issues
Vol. 8, Iss. 8 Virtual Journal for Biomedical Optics

Citation
Abbas Cheddad, Christoffer Nord, Andreas Hörnblad, Renata Prunskaite-Hyyryläinen, Maria Eriksson, Fredrik Georgsson, Seppo J. Vainio, and Ulf Ahlgren, "Improving signal detection in emission optical projection tomography via single source multi-exposure image fusion," Opt. Express 21, 16584-16604 (2013)
http://www.opticsinfobase.org/vjbo/abstract.cfm?URI=oe-21-14-16584


Sort:  Author  |  Year  |  Journal  |  Reset  

References

  1. J. Sharpe, U. Ahlgren, P. Perry, B. Hill, A. Ross, J. Hecksher-Sørensen, R. Baldock, and D. Davidson, “Optical projection tomography as a tool for 3D microscopy and gene expression studies,” Science296(5567), 541–545 (2002). [CrossRef] [PubMed]
  2. R. C. Gonzalez and R. E. Woods, Digital Image Processing (Pearson/Prentice Hall, 2008).
  3. Z. Zhang and R. S. Blum, “A categorization of multiscale-decomposition-based image fusion schemes with a performance study for a digital camera application,” Proc. IEEE87(8), 1315–1326 (1999). [CrossRef]
  4. G. Simone, A. Farina, F. C. Morabito, S. B. Serpico, and L. Bruzzone, “Image fusion techniques for remote sensing applications,” Inf. Fusion3(1), 3–15 (2002). [CrossRef]
  5. G. K. Matsopoulos, S. Marshall, and J. N. H. Brunt, “Multiresolution morphological fusion of MR and CT images of the human brain,” IEEE Proc.Vis Image Sign141(3), 137–142 (1994).
  6. S. Cheng, J. He, and Z. Lv, “Medical image of PET/CT weighted fusion based on wavelet transform,” in Proceedings of IEEE Conference on Bioinformatics and Biomedical Engineering, (IEEE, 2008), pp. 2523–2525. [CrossRef]
  7. S. Daneshvar and H. Ghassemian, “MRI and PET image fusion by combining IHS and retina-inspired models,” Inf. Fusion11(2), 114–123 (2010). [CrossRef]
  8. P. Fei, Z. Yu, X. Wang, P. J. Lu, Y. Fu, Z. He, J. Xiong, and Y. Huang, “High dynamic range optical projection tomography (HDR-OPT),” Opt. Express20(8), 8824–8836 (2012). [CrossRef] [PubMed]
  9. A. Vavilin, K. Deb, and K. H. Jo,N. García-Pedrajas, ed., “Fast HDR image generation technique based on exposure blending,” in Proceedings of Trends in Applied Intelligent Systems:23rd International Conference on Industrial Engineering and Other Applications of Applied Intelligent Systems, N. García-Pedrajas, ed. (Springer, 2010), pp. 379–388. [CrossRef]
  10. A. O. Akyuz and E. Reinhard, “Noise reduction in high dynamic range imaging,” J Vis Commun Image R18(5), 366–376 (2007). [CrossRef]
  11. M. Ghantous, S. Ghosh, and M. Bayoumi, “A gradient-based hybrid image fusion scheme using object extraction,” in Proceedings of IEEE Conference on Image Processing, (IEEE, 2008), pp. 1300–1303. [CrossRef]
  12. Y. Yang, D. S. Park, S. Huang, and N. Rao, “Medical image fusion via an effective wavelet-based approach,” EURASIP J. Adv. Signal Process.2010, 1–14 (2010). [CrossRef]
  13. A. A. Goshtasby, “Fusion of multi-exposure images,” Image Vis. Comput.23(6), 611–618 (2005). [CrossRef]
  14. Z. B. Wang and Y. Ma, “Medical image fusion using m-PCNN,” Inf. Fusion9(2), 176–185 (2008). [CrossRef]
  15. Q. Xiao-Bo, Y. Jing-Wen, X. Hong-Zhi, and Z. Zi-Qian, “Image fusion algorithm based on spatial frequency-motivated pulse coupled neural networks in nonsubsampled contourlet transform domain,” Acta Automatica Sinica34, 1508–1514 (2008).
  16. N. Mitianoudis and T. Stathaki, “Pixel-based and region-based image fusion schemes using ICA bases,” Inf. Fusion8(2), 131–142 (2007). [CrossRef]
  17. T. Mertens, J. Kautz, and F. Van Reeth, “Exposure fusion: a simple and practical alternative to high dynamic range photograph,” Comput. Graph. Forum28(1), 161–171 (2009). [CrossRef]
  18. A. Toet, “Hierarchical image fusion,” Mach. Vis. Appl.3(1), 1–11 (1990). [CrossRef]
  19. P. J. Burt and R. J. Kolczynski, “Enhanced image capture through fusion,” in Proceedings of IEEE Conference on Computer Vision. (IEEE, 1993), pp. 173–182.
  20. L. J. Chipman, T. M. Orr, and L. N. Graham, “Wavelets and image fusion,” in Proceedings of IEEE Conference on Image Processing. (IEEE, 1995), pp. 248–251. [CrossRef]
  21. L. Hui, B. S. Manjunath, and S. K. Mitra, “Multi-sensor image fusion using the wavelet transform,” in Proceedings of IEEE Conference on Image Processing, (IEEE, 1994), pp. 51–55. [CrossRef]
  22. J. J. Lewis, R. J. O'Callaghan, S. G. Nikolov, D. R. Bull, and C. N. Canagarajah, “Pixel- and region-based image fusion with complex wavelets,” Inf. Fusion8(2), 119–130 (2007). [CrossRef]
  23. S. Nikolov, P. Hill, D. Bull, and N. C. Jah, “Wavelets for image fusion,” in Wavelets in Signal and Image Analysis: From Theory to Practice, A. A. Petrosian and F. G. Meyer, eds. (Springer, 2001), pp. 213–244.
  24. D. Looney and D. P. Mandic, “Multi-scale image fusion using complex extensions of EMD,” IEEE Trans. Signal Process.57(4), 1626–1630 (2009). [CrossRef]
  25. D. Mandic, M. Golz, A. Kuh, D. Obradovic, T. Tanaka, G. Souretis, W. Leong, D. Looney, and M. Hulle, “Complex Empirical Mode Decomposition for Multichannel Information Fusion,” in Signal Processing Techniques for Knowledge Extraction and Information Fusion (Springer, 2008), pp. 243–260.
  26. P. E. Debevec and J. Malik, “Recovering high dynamic range radiance maps from photographs,” in Proceedings of the 24th annual conference on Computer graphics and interactive techniques, (ACM Press/Addison-Wesley Publishing Co, 1997) pp. 369–378. [CrossRef]
  27. K. Stark, S. Vainio, G. Vassileva, and A. P. McMahon, “Epithelial transformation of metanephric mesenchyme in the developing kidney regulated by Wnt-4,” Nature 372(6507), 679–683 (1994). [CrossRef] [PubMed]
  28. T. Alanentalo, A. Asayesh, H. Morrison, C. E. Lorén, D. Holmberg, J. Sharpe, and U. Ahlgren, “Tomographic molecular imaging and 3D quantification within adult mouse organs,” Nat. Methods 4(1), 31–33 (2007). [CrossRef] [PubMed]
  29. A. Cheddad, C. Svensson, J. Sharpe, F. Georgsson, and U. Ahlgren, “Image processing assisted algorithms for optical projection tomography,” IEEE Trans. Med. Imaging 31(1), 1–15 (2012). [CrossRef] [PubMed]
  30. A. U. Eriksson, C. Svensson, A. Hörnblad, A. Cheddad, E. Kostromina, M. Eriksson, N. Norlin, A. Pileggi, J. Sharpe, F. Georgsson, T. Alanentalo, and U. Ahlgren, “Near infrared optical projection tomography for assessments of β-cell mass distribution in diabetes research,” J. Vis. Exp. 71, e50238 (2013). [PubMed]
  31. S. G. Nikolov, D. R. Bull, C. N. Canagarajah, M. Halliwell, and P. N. T. Wells, “Image fusion using a 3-D wavelet transform,” in Proceedings of IEEE Conference on Image Processing and Its Applications (IEEE, 1999), pp. 235–239. [CrossRef]
  32. E. D. Pisano, S. Zong, B. M. Hemminger, M. DeLuca, R. E. Johnston, K. Muller, M. P. Braeuning, and S. M. Pizer, “Contrast limited adaptive histogram equalization image processing to improve the detection of simulated spiculations in dense mammograms,” J. Digit. Imaging 11(4), 193–200 (1998). [CrossRef] [PubMed]
  33. S. M. Pizer, E. P. Amburn, J. D. Austin, R. Cromartie, A. Geselowitz, T. Greer, B. ter Haar Romeny, J. B. Zimmerman, and K. Zuiderveld, “Adaptive histogram equalization and its variations,” Comput. Vis. Graph. Image Process. 39(3), 355–368 (1987). [CrossRef]
  34. S. M. Pizer, J. B. Zimmerman, and E. V. Staab, “Adaptive grey level assignment in CT scan display,” J. Comput. Assist. Tomogr. 8(2), 300–305 (1984). [PubMed]
  35. K. Zuiderveld, “Contrast Limited Adaptive Histogram Equalization,” in Graphic Gems IV, S. H. Paul, ed. (Academic Press Professional, Inc., 1994), pp. 474–485.
  36. C.-J. Du and D.-W. Sun, “Retrospective shading correction of confocal laser scanning microscopy beef images for three-dimensional visualization,” Food Bioprocess Technol. 2(2), 167–176 (2009). [CrossRef]
  37. L. Choudur, U. Dayal, C. Gupta, and R. Swaminathan, “On wavelet compression and cardinality estimation of enterprise data” (2010), retrieved http://www.hpl.hp.com/techreports/2010/HPL-2010-132.pdf.
  38. E. Reinhard, G. Ward, S. Pattanaik, and P. Debevec, High Dynamic Range Imaging: Acquisition, Display, and Image-Based Lighting, The Morgan Kaufmann Series in Computer Graphics (Morgan Kaufmann Publishers Inc., 2005).
  39. A. Hörnblad, A. Cheddad, and U. Ahlgren, “An improved protocol for optical projection tomography imaging reveals lobular heterogeneities in pancreatic islet and β-cell mass distribution,” Islets 3(4), 204–208 (2011). [CrossRef] [PubMed]
  40. J. R. Walls, J. G. Sled, J. Sharpe, and R. M. Henkelman, “Correction of artefacts in optical projection tomography,” Phys. Med. Biol. 50(19), 4645–4665 (2005). [CrossRef] [PubMed]
