Optics Express

  • Editor: C. Martijn de Sterke
  • Vol. 19, Iss. 10 — May 9, 2011
  • pp: 9315–9329

Multispectral image enhancement for effective visualization

Noriaki Hashimoto, Yuri Murakami, Pinky A. Bautista, Masahiro Yamaguchi, Takashi Obi, Nagaaki Ohyama, Kuniaki Uto, and Yukio Kosugi  »View Author Affiliations


Optics Express, Vol. 19, Issue 10, pp. 9315-9329 (2011)
http://dx.doi.org/10.1364/OE.19.009315



Abstract

Color enhancement of multispectral images is useful for visualizing an image's spectral features. Previously, a color enhancement method that enhances the feature of a specified spectral band without changing the average color distribution was proposed. However, the enhanced features are sometimes indiscernible or invisible, especially when the enhanced spectrum lies outside the visible range. In this paper, we extend the conventional method for more effective visualization of spectral features in both the visible and non-visible ranges. In the proposed method, the user specifies both the spectral band from which to extract the spectral feature and the color for visualization, so that the spectral feature is enhanced with an arbitrary color. The proposed color enhancement method was applied to different types of multispectral images, and its effectiveness in visualizing spectral features was verified.

© 2011 OSA

1. Introduction

Multispectral imaging uses more than three spectral filters to capture images that include spectral information useful for remote sensing [1–3], color reproduction [4,5], image analysis [6–9], and so on. High-fidelity color reproduction [4,5], which is difficult to accomplish with conventional RGB systems because of the limited information contained in RGB images, is made possible by using multispectral images of the visible spectral range. Moreover, spectral color features that are invisible to the human eye can also be captured and employed for object detection, recognition, or quantification. Color enhancement is an effective tool for exploring the spectral features contained in multispectral images. For example, Gillespie et al. [6], Ward et al. [7], and others proposed color enhancement methods for multispectral images. In most cases, the enhancement results are pseudo-color images in which the natural colors of the objects are not preserved. However, the natural color of the objects is also important for interpreting the spectral features when the multispectral image includes the visible spectral range.

Mitsui et al. [8,9] proposed a multispectral color enhancement method in which the enhanced results are overlaid on the original natural-colored images. In this method, the differences between the original multispectral image and its approximation by a few principal components are amplified at specified spectral bands. The indiscernible spectral features in the multispectral image are thus visualized without changing the average color distribution. However, the enhanced feature sometimes cannot be observed, especially when the specified spectral band is not visually significant, for example, in the near-ultraviolet or infrared. Also, when an image has a large number of spectral bands, the enhanced results are not clear.

In this paper, we extend the conventional method [8] by modifying the visualization algorithm so that the enhanced spectral features of a multispectral image, which could not be visualized well by the conventional method, are visualized effectively. In the proposed method, the user can specify the spectral band from which to extract the spectral feature and the color for visualization independently, so that the desired spectral feature is enhanced with the specified color. This allows the enhanced spectral features to be visualized clearly even if the feature lies in the invisible range or the image has a large number of spectral bands, as hyperspectral images do. For this purpose, we present three methods to determine the color for visualization. In the experiment, we applied the proposed methods to various types of multispectral images, namely a skin image, a microscopic image, and a rice paddy image, and verified that the proposed method can effectively enhance the indiscernible spectral features in these images.

2. Method

2.1. Multispectral color enhancement

The color enhancement presented in this paper is mainly based on the method proposed by Mitsui et al. [8]. This method enhances the color difference from the dominant Karhunen-Loeve (KL) component without changing the color determined by that component. The flow of the color enhancement procedure is shown in Fig. 1. First, a set of spectral data is extracted from the image in order to derive the dominant component. The data can be extracted from the entire image or from part of it (e.g., a region of non-interest), depending on the requirements of the application. A covariance matrix is then derived from the extracted spectral samples to calculate the KL basis vectors, and the first few KL vectors are used to estimate the dominant component of the image.

Fig. 1 The flow of color enhancement.

For an N-band multispectral image, the enhanced signal vector of the j-th pixel, $\mathbf{g}_{ej}$ (an N-dimensional vector), is represented as

$$\mathbf{g}_{ej} = W(\mathbf{g}_j - \mathbf{s}_j) + \mathbf{g}_j, \tag{1}$$

where $W$ is an $N \times N$ matrix for the enhancement, $\mathbf{g}_j$ is the original multispectral signal vector of the j-th pixel, and $\mathbf{s}_j$ is the signal vector estimated with the dominant KL vectors,

$$\mathbf{s}_j = \sum_{i=1}^{m} \alpha_{ij}\,\mathbf{u}_i + \bar{\mathbf{g}}, \tag{2}$$

where $m$ is the number of basis vectors used in the estimation ($m < N$), $\mathbf{u}_i$ is the i-th KL basis vector (an N-dimensional vector), and $\bar{\mathbf{g}}$ is the average vector of the set of pixel data used to derive the KL basis vectors; $\mathbf{g}_j - \mathbf{s}_j$ is considered the residual component. Furthermore, $\alpha_{ij}$ is the i-th KL coefficient of the j-th pixel,

$$\alpha_{ij} = \mathbf{u}_i^{T}(\mathbf{g}_j - \bar{\mathbf{g}}). \tag{3}$$

The matrix $W$ determines the result of the enhancement. In Ref. [8], the element in the p-th row and q-th column, $[W]_{pq}$, is given by

$$[W]_{pq} = \begin{cases} k & p = q = n \\ 0 & \text{otherwise}, \end{cases} \tag{4}$$

where $n$ is the index of the enhanced band and $k$ is a coefficient that amplifies the residual component. The amplified residual in the n-th band is added to the original signal value in the n-th band according to Eq. (1). In addition, from the relationship of Eqs. (1), (2), and (3) we have

$$\mathbf{g}_{ej} = \left[W(E - UU^{T}) + E\right]\mathbf{g}_j - W(E - UU^{T})\bar{\mathbf{g}}, \tag{5}$$

where $E$ is the $N \times N$ identity matrix and $U$ is an $N \times N$ matrix whose columns are the KL basis vectors, with the q-th column expressed as

$$[U]_q = \begin{cases} \mathbf{u}_q & q \le m \\ \mathbf{0} & \text{otherwise}. \end{cases} \tag{6}$$

In Eq. (5), the second term on the right-hand side is a constant vector. Thus, the spectral enhancement is easily computed by matrix multiplications and additions.
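To make the procedure concrete, here is a minimal NumPy sketch of Eqs. (1)–(6), assuming the image has been flattened to an array of N-band pixel vectors; the function and variable names are ours, not from the paper's implementation:

```python
import numpy as np

def kl_enhance(pixels, W, m):
    """Amplify the residual from an m-component KL approximation, Eq. (5).

    pixels : (P, N) array of N-band spectral signal vectors
    W      : (N, N) enhancement matrix, e.g. Eq. (4) or Eq. (7)
    m      : number of KL basis vectors used for the estimate s_j
    """
    g_bar = pixels.mean(axis=0)                    # average spectrum g-bar
    centered = pixels - g_bar
    cov = centered.T @ centered / len(pixels)      # sample covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)
    U = eigvecs[:, np.argsort(eigvals)[::-1][:m]]  # dominant KL vectors (N, m)
    # Eq. (5): g_e = [W(E - U U^T) + E] g - W(E - U U^T) g_bar
    A = W @ (np.eye(pixels.shape[1]) - U @ U.T)
    return pixels @ (A + np.eye(pixels.shape[1])).T - A @ g_bar
```

With the conventional $W$ of Eq. (4), only the n-th band of the residual is amplified; Section 2.2 replaces $W$ so that the amplified residual can point toward an arbitrary target color.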

The enhanced multispectral image $\mathbf{g}_{ej}$ is transformed into spectral reflectance or transmittance by a spectral estimation technique [10], and the color image is generated using a color-matching function (CMF) such as the CIE 1931 XYZ CMF, an illumination spectrum, and an XYZ-to-RGB transform matrix.
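The rendering step described above can be sketched as follows; the CMF, illuminant, and XYZ-to-RGB matrix passed in are placeholders for the actual CIE 1931 data and display transform:

```python
import numpy as np

def render_rgb(reflectance, cmf, illuminant, xyz_to_rgb):
    """Render estimated spectra to linear RGB.

    reflectance : (P, L) spectral reflectance per pixel, L wavelength samples
    cmf         : (L, 3) color-matching function sampled at the same wavelengths
    illuminant  : (L,) illumination spectrum
    xyz_to_rgb  : (3, 3) XYZ-to-RGB transform matrix
    """
    xyz = (reflectance * illuminant) @ cmf   # integrate radiance over wavelength
    rgb = xyz @ xyz_to_rgb.T                 # XYZ -> RGB
    return np.clip(rgb, 0.0, None)           # clip out-of-gamut negatives
```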

2.2. Modification of weighting factor matrix

In order to overcome the limitation of the conventional method, we extended the definition of the matrix $W$ in Eq. (4) [11] so that the band at which to extract the spectral feature and the color for visualization can be specified independently. In this paper, the modified version of $W$ is called the weighting factor matrix; its q-th column vector is designed as follows:

$$[W]_q = \begin{cases} k(\mathbf{g}_d - \mathbf{g}_a) & q = n \\ \mathbf{0} & \text{otherwise}, \end{cases} \tag{7}$$

where $\mathbf{g}_d$ is the spectral data of the target color to be visualized and $\mathbf{g}_a$ is the spectral data of the background in the image. According to Eqs. (1) and (7), the spectrum $(\mathbf{g}_d - \mathbf{g}_a)$, scaled by the residual component at each pixel, is added to the original signal value $\mathbf{g}_j$. Setting a proper coefficient $k$ shifts the color of the enhanced region toward the target color determined by $\mathbf{g}_d$ [Eq. (1)]. The background spectrum $\mathbf{g}_a$ can be, for example, the average spectral data of the entire image.
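A minimal sketch of building the weighting factor matrix of Eq. (7); all names are illustrative:

```python
import numpy as np

def weighting_matrix(g_d, g_a, n, k, num_bands):
    """Return the N x N matrix W of Eq. (7).

    Only the n-th column is nonzero; it points from the background spectrum
    g_a toward the target spectrum g_d, scaled by the amplification factor k.
    """
    W = np.zeros((num_bands, num_bands))
    W[:, n] = k * (np.asarray(g_d) - np.asarray(g_a))
    return W
```

Used in Eq. (1), this matrix converts the scalar residual in band n at each pixel into a color shift along the direction $\mathbf{g}_d - \mathbf{g}_a$.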

There are several approaches to determining $\mathbf{g}_d$, the spectral data of the target color; in the following we show three possible methods.

Method I. In the first method, a relationship between the wavelengths of the multispectral image and the colors for visualization is defined. The spectrum of the color assigned to the n-th band is then derived by a spectral estimation technique and used as $\mathbf{g}_d$ when the n-th band is specified for enhancement. For example, hues from blue through red can be assigned to the bands between the shortest and longest wavelengths of the multispectral image. In this method, the spectrum $\mathbf{g}_d$ is calculated by employing a spectrum estimation technique as follows:

$$\mathbf{g}_d = HC^{+}\begin{bmatrix} X_d \\ Y_d \\ Z_d \end{bmatrix}, \tag{8}$$

where $C^{+}$ is the pseudo-inverse matrix of the CMF, and the tristimulus value $(X_n, Y_n, Z_n)$ corresponding to the color for enhancement of the n-th band is used as $(X_d, Y_d, Z_d)$. $H$ is the system matrix relating the pixel signal values $\mathbf{g}$ to the spectral data $\mathbf{f}$:

$$\mathbf{g} = H\mathbf{f}. \tag{9}$$
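Equation (8) can be sketched as below, where C is the matrix that maps a sampled spectrum to XYZ and H is the camera system matrix of Eq. (9); both are stand-ins here for the actual CMF and camera sensitivities:

```python
import numpy as np

def estimate_target_signal(H, C, xyz_d):
    """Map a target tristimulus value to multispectral signal space, Eq. (8).

    H     : (N, L) camera system matrix, g = H f
    C     : (3, L) color-matching function matrix, xyz = C f
    xyz_d : (3,) target tristimulus value (X_d, Y_d, Z_d)
    """
    C_pinv = np.linalg.pinv(C)   # C^+ : least-norm spectrum for a given XYZ
    f_d = C_pinv @ xyz_d         # estimated spectrum of the target color
    return H @ f_d               # Eq. (9): project into camera signal space
```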

Method II. In the second method, an arbitrary color or spectrum is specified according to the user's intent. The user chooses the color for visualization with a tool such as a color picker, and the spectrum corresponding to the chosen color is estimated using Eq. (8). In this case, $(X_d, Y_d, Z_d)$ is the tristimulus value transformed from the RGB vector of the color selected by the user. If the user desires the color or spectrum of a physical object as the enhanced result, the spectrum of the target object can be selected from a spectral image with a spectrum-picker tool.

Method III. Hue is a parameter in color spaces such as HSV, HLS, and CIE L*C*h, and the opposite hue in such spaces corresponds to the perceptual inverse. Using this property, the spectrum for visualization, $\mathbf{g}_d$, can be determined from the hue distribution of the image. This method sets $\mathbf{g}_d$ automatically from the average hue of the image, and it can be effective when the hues of the pixels in the image are similar. The spectrum is calculated from $\bar{L}^*$, $\bar{a}^*$, and $\bar{b}^*$, the average values of $L^*$, $a^*$, and $b^*$ over the entire image. The color with the opposite hue is represented as

$$a_d^* = -\bar{a}^*, \qquad b_d^* = -\bar{b}^*, \tag{10}$$

and the spectrum $\mathbf{g}_d$ is estimated from the tristimulus value transformed from $\bar{L}^*$, $a_d^*$, and $b_d^*$. It can be more effective to also change the luminance $L^*$ in some cases.
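A sketch of the hue inversion in Eq. (10), operating directly on CIELAB values (the conversions to and from XYZ are omitted; the names are ours):

```python
import numpy as np

def opposite_hue_lab(lab_pixels, L_override=None):
    """Return the CIELAB target color with hue opposite to the image mean.

    lab_pixels : (P, 3) array of (L*, a*, b*) values
    L_override : optional replacement for the mean L*, e.g. 50
    """
    L_bar, a_bar, b_bar = np.asarray(lab_pixels).mean(axis=0)
    L_d = L_bar if L_override is None else L_override
    return np.array([L_d, -a_bar, -b_bar])   # Eq. (10): negate a*, b*
```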

3. Experiment

In the experiment, we applied the proposed color enhancement method to a multispectral image of human skin captured by a filter-wheel multispectral camera [9], a pathological slide captured by a multispectral microscope [11], and a rice paddy image captured by a hyperspectral imager mounted on a cargo crane [12].

3.1. Application to a skin image

Fig. 2 The multispectral image of human skin.

Table 1. Center Wavelength and Bandwidth of Each Spectral Band of the Multispectral Camera

Fig. 3 The results of color enhancement for a skin image with the conventional method using three basis vectors (k = 30). (a) 445 nm, (b) 545 nm, (c) 600 nm, and (d) 710 nm are enhanced.
Fig. 4 The flow of the color mapping method to each spectral band.

The results of enhancing the skin image with the proposed method are shown in Fig. 5. The spectral features at 445 nm, 545 nm, and 600 nm, which were also enhanced by the conventional method as shown in Fig. 3, are visualized. Additionally, the spectral feature at 710 nm, which was not visible with the conventional method, was successfully enhanced, and the structure of the vein is clearly observed. This result shows that the proposed method can visualize spectral features even in the invisible range. The artifacts on the edges of the fingers result from the motion of the subject during image capture with the filter-wheel multispectral camera.

Fig. 5 The results of color enhancement for a skin image with the proposed method using three basis vectors (k = 30). (a) 445 nm, (b) 545 nm, (c) 600 nm, and (d) 710 nm are enhanced.

Moreover, we evaluated these methods numerically by comparing the color differences between the normal skin regions and the vein regions in the original image and in the images enhanced with the conventional and proposed methods when the 16th band is enhanced. The average CIE L*a*b* color differences between the normal skin regions and the vein regions are shown in Table 2. With the conventional method, the color difference between the two regions is almost the same as in the original image, and the color differences arise mainly from luminance differences. With the proposed method, however, the color difference increases compared with the original image; in particular, Δa* changes greatly. This indicates that the proposed method can enhance the image more effectively.
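One plausible reading of this numerical comparison is the Euclidean CIELAB difference between the mean colors of the two regions; the sketch below uses that reading (the exact averaging procedure is not spelled out above, so treat this as an assumption):

```python
import numpy as np

def mean_delta_e(lab_region1, lab_region2):
    """Euclidean Delta E*ab between the mean CIELAB colors of two regions.

    Each region is a (P, 3) array of (L*, a*, b*) pixel values.
    """
    m1 = np.asarray(lab_region1).mean(axis=0)
    m2 = np.asarray(lab_region2).mean(axis=0)
    return float(np.linalg.norm(m1 - m2))   # Delta E*ab between region means
```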

Table 2. The Color Differences Between the Normal Skin Region and the Vein Region When the 16th Band Is Enhanced


Scribner et al. [13,14], Vilaseca et al. [15], and Jacobson et al. [16] have discussed the visualization of spectral features in the invisible range, but their results were mostly pseudo-colored images. Our enhancement method keeps the natural color of the background in the image, which can make the result easier to interpret.

3.2. Application to a pathological image

In the application to the pathological image, we considered enhancing the fiber region in a 16-band H&E (hematoxylin-eosin) stained liver-tissue specimen image captured with a multispectral microscope [11], whose spectral specification is shown in Table 3. The fiber region is hardly differentiated in the H&E stained image shown in Fig. 6(a); hence, MT (Masson trichrome) staining is normally used to see the fiber region, as shown in Fig. 6(b). It has been reported that spectral imaging provides information for discriminating the fiber region in an H&E stained image [17]. In this experiment, we applied the color enhancement to the H&E stained image to clearly visualize the fiber region, where the spectrum $\mathbf{g}_d$ for visualization was determined from the color of the MT stained fiber region according to method II. The size of the images used in the experiment is 2048 × 2048 pixels. To generate the KL vectors for the enhancement, 400 spectral transmittance samples, each the pooled average of the pixel transmittances within a 5 × 5 pixel ROI, were obtained for the different tissue components, such as the nucleus, cytoplasm, and red blood cells, excluding the fiber. The average of the spectral data was used as the background spectrum $\mathbf{g}_a$. The spectrum of the fiber region in the MT stained specimen, shown in Fig. 7(a), was employed as the spectrum $\mathbf{g}_d$ for visualization.

Fig. 6 The multispectral images of the liver tissue specimens. (a) The H&E stained tissue specimen. (b) The MT stained specimen of serial section.
Fig. 7 Spectral data. (a) Average spectral transmittances of the fiber regions in an H&E and an MT stained liver-tissue image. (b) Average residuals of the different tissue components found in the H&E liver-tissue image. Each plot represents the average of 100 samples.

Table 3. Center Wavelength and Bandwidth of Each Spectral Band of the Multispectral Camera for Microscope


Here, the color enhancement was implemented in spectral transmittance space to remove non-uniformity of the illumination. The spectral transmittance is calculated as

$$t(\lambda) = \frac{i(\lambda)}{i_g(\lambda)}, \tag{12}$$

where $i(\lambda)$ is the signal value of the tissue and $i_g(\lambda)$ is that of the glass. Figure 7(b) shows the average residual component $(\mathbf{g}_j - \mathbf{s}_j)$ for the different tissue components when six KL basis vectors are used. From Fig. 7(b), it is seen that the fiber region has a large residual at the 8th band, so we set the enhanced band to n = 8. The resulting images of the color enhancement are shown in Fig. 8. Because of the shape of the H&E stained transmittance of the fiber [Fig. 7(a)], the color hardly changes even when the spectral transmittance in the 8th band is amplified. While the enhanced features are not readily visible with the conventional method, with the proposed method the fiber region in the H&E stained image was enhanced to blue, similar to its color in the MT stained image. Since the color is visualized similarly to that of the MT stained tissue specimen, it should be easier for pathologists to evaluate the result by comparing it with the conventional physical staining technique.

Fig. 8 The enhanced results of the H&E stained tissue (n = 8,k = 30). (a) The conventional method. (b) The proposed method.

Method III was also applied to the same H&E stained pathological image. In method III, the spectrum $\mathbf{g}_d$ for visualization is determined automatically based on the hue in the CIE L*C*h color space. First, the average $L^*$, $a^*$, and $b^*$ are calculated from the average spectrum of the image. Then, as written in Eq. (10), $\bar{a}^*$ and $\bar{b}^*$ are negated to give $a_d^*$ and $b_d^*$, which indicate the opposite hue, i.e., the complementary color. Finally, these values are transformed into XYZ tristimulus values, and the spectrum $\mathbf{g}_d$ is derived by Eq. (8). In this method, the spectrum $\mathbf{g}_d$ has the hue opposite to the average hue of the entire image, regardless of the enhanced band n.

We again set the enhanced band to n = 8. The color $\mathbf{g}_d$ for visualization and the background color $\mathbf{g}_a$ were calculated automatically. The enhanced result is expected to improve when the luminance of the spectrum $\mathbf{g}_d$ is changed in cases where the luminance of the entire image is high, so an additional enhancement was also performed in which $L^*$ for the spectrum $\mathbf{g}_d$ was set to $L^* = 50$. In the enhanced results shown in Fig. 9, the fiber regions are enhanced with a green color, the perceptual opposite of the average color of the H&E stained image. When the hues of all pixels in the image are similar, as in the present case, automatic color determination enables effective enhancement without the intricacy of selecting the spectrum for visualization.

Fig. 9 The enhanced results of the H&E stained tissue by automatic definition (n = 8,k = 30). (a) L * for the spectrum gd is not changed. (b) L* = 50.

Table 4 shows the average color differences between the cytoplasm and fiber regions, which are both stained with eosin in an H&E stained image. In this table, Method III (a) and (b) correspond to the results shown in Figs. 9(a) and 9(b), respectively. Both proposed methods resulted in larger color differences than the conventional method, which indicates their effectiveness.

Table 4. The Color Differences Between the Cytoplasm Region and the Fiber Region


3.3. Application to a hyperspectral image

With the conventional method, when an image has a large number of bands, as hyperspectral images do, the amplified value in the enhanced band is not clearly visualized because the impact of amplifying a single band is small. The proposed methods, however, are effective on such images. In our experiment we applied method II to the hyperspectral image of a rice paddy, shown in Fig. 10, and explored its spectral features by observing the enhanced results. The image was obtained using a cargo crane carrying a hyperspectral sensor (ImSpector V10, Specim Co.) with 121 bands over 400–1000 nm, 3 nm spectral resolution, and a 5 nm sampling interval [12]. The components at wavelengths beyond 900 nm were not used, as they contain much noise. Each pixel value in the image was transformed into spectral reflectance with reference to the pixel value of the standard white board in the same image.
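The reflectance conversion described above amounts to a per-band division by the white-board spectrum; a minimal sketch (the array names are ours):

```python
import numpy as np

def to_reflectance(cube, white_spectrum):
    """Convert raw hyperspectral signals to reflectance.

    cube           : (H, W, B) raw signal values, B spectral bands
    white_spectrum : (B,) signal of the standard white board in the same scene
    """
    return cube / np.asarray(white_spectrum)   # broadcast division per band
```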

Fig. 10 Natural color presentation of a rice paddy image under D65 light source. The image size is 2000 × 400 pixels that was trimmed from the original image.

The hyperspectral image in Fig. 10 mainly consists of crop, weed, and soil, and we investigated their spectral features using the color enhancement of method II. The KL basis vectors were generated from a region of the image containing weed and soil. Because the region extracted for the spectral samples consisted mainly of weeds, we assumed that one KL vector was sufficient to estimate the spectra of the weeds, and we used the first KL vector. The spectrum $\mathbf{g}_d$ is the spectrum of magenta obtained from a Macbeth Color Checker image captured by the same hyperspectral camera. Under these conditions, we enhanced the rice paddy image at bands from 500 to 900 nm at a 50 nm interval.

The enhanced hyperspectral images are shown in Figs. 11 and 12. In Fig. 11(b), the weed region and part of the crop region are enhanced with a magenta color. Because only one KL basis vector was used and the spectra of the weed region vary widely, the weed region, which was used for generating the basis vectors, was also enhanced. The average residual components of the different regions in the rice paddy image are shown in Fig. 13, where "Crop 1" represents the residual of the crop region that is not enhanced in Fig. 11(b) and "Crop 2" represents the enhanced crop region. The spectral variations in the crop regions are mainly due to differences in their illumination conditions, such as shading. As shown in Fig. 13, the soil region has a large residual component around 700 nm, and the crop region has a large residual around 800 nm, which could correspond to the biomass content [18]. The spectral features enhanced at these wavelengths are shown in Figs. 11(e) and 12(b). Figure 14 shows a magnified part of the rice paddy image whose spectral features at 700 nm and 725 nm were enhanced; the contrast between the leaves and the crops is better in Fig. 14(b), owing to the negative residual component of the crop region at 725 nm (Fig. 13). The original spectral data of each region are shown in Fig. 15. The spectra of the crop and weed regions differ greatly over 680–750 nm, the "red edge" originating from the spectral feature of chlorophyll, which is not observed in the soil region. Furthermore, the residual components in the crop regions are due to differences in their spectral shapes at near-infrared wavelengths. As these results show, the salient spectral features in the hyperspectral image of the rice paddy were successfully visualized by the proposed color enhancement, and such features can be applied to discriminate each region. Further investigation of spectral features in hyperspectral images using the proposed enhancement method could lead to a new index for advanced vegetation analysis.

Fig. 11 The enhanced result of the rice paddy images (k = 20). (a) 500 nm, (b) 550 nm, (c) 600 nm, (d) 650 nm, (e) 700 nm are enhanced, respectively.
Fig. 12 The enhanced result of the rice paddy images (k = 20). (a) 750 nm, (b) 800 nm, (c) 850 nm, (d) 900 nm are enhanced, respectively. The weed and crop regions, which were not clearly differentiated in the original image (see Fig. 10), are now differentiated.
Fig. 13 The average residual components of the different regions in the rice paddy image. Each plot represents the average of 100 samples.
Fig. 14 600 × 400 pixels cropped from the magnified version of the rice paddy image enhanced at: (a) 700 nm; (b) 725 nm.
Fig. 15 The average spectral reflectances of the different regions in the rice paddy image. Each plot represents the average of 100 samples.

4. Conclusion

This paper proposes a method for the effective visualization of enhanced spectral features, in which the design of the weighting factor matrix is modified so that the enhanced feature appears with an arbitrary color. Several methods to determine the color for visualization are also presented. Even if an image has a salient spectral feature in the invisible wavelength range or has a large number of spectral bands, the spectral feature can still be enhanced and effectively visualized with the proposed method. The method will be useful for exploring the spectral features hidden in multispectral or hyperspectral images.

The authors gratefully acknowledge Dr. Yukako Yagi of Harvard Medical School, Boston, MA, USA, for helpful advice and discussion.

References and links

1. Z. Lee, K. L. Carder, C. D. Mobley, R. G. Steward, and J. S. Patch, "Hyperspectral remote sensing for shallow waters. I. A semianalytical model," Appl. Opt. 37, 6329–6338 (1998).
2. J. A. Gualtieri and R. F. Cromp, "Support vector machines for hyperspectral remote sensing classification," Proc. SPIE 3584, 221–232 (1999).
3. B.-C. Gao, M. J. Montes, Z. Ahmad, and C. O. Davis, "Atmospheric correction algorithm for hyperspectral remote sensing of ocean color from space," Appl. Opt. 39, 887–896 (2000).
4. M. Yamaguchi, T. Teraji, K. Ohsawa, T. Uchiyama, H. Motomura, Y. Murakami, and N. Ohyama, "Color image reproduction based on the multispectral and multiprimary imaging: Experimental evaluation," Proc. SPIE 4663, 15–26 (2002).
5. J. Y. Hardeberg, F. Schmitt, and H. Brettel, "Multispectral color image capture using a liquid crystal tunable filter," Opt. Eng. 41, 2532–2548 (2002).
6. A. R. Gillespie, A. B. Kahle, and R. E. Walker, "Color enhancement of highly correlated images. I. Decorrelation and HSI contrast stretches," Remote Sens. Environ. 20, 209–235 (1986).
7. J. Ward, V. Magnotta, N. C. Andreasen, W. Ooteman, P. Nopoulos, and R. Pierson, "Color enhancement of multispectral MR images: Improving the visualization of subcortical structures," J. Comput. Assist. Tomogr. 25, 942–949 (2001).
8. M. Mitsui, Y. Murakami, T. Obi, M. Yamaguchi, and N. Ohyama, "Color enhancement in multispectral image using the Karhunen-Loeve transform," Opt. Rev. 12, 69–75 (2005).
9. M. Yamaguchi, M. Mitsui, Y. Murakami, H. Fukuda, N. Ohyama, and Y. Kubota, "Multispectral color imaging for dermatology: application in inflammatory and immunologic diseases," in Proceedings of 13th Color Imaging Conference (Society for Imaging Science and Technology/Society for Information Display, 2005), pp. 52–58.
10. Y. Murakami, T. Obi, M. Yamaguchi, N. Ohyama, and Y. Komiya, "Spectral reflectance estimation from multi-band image using color chart," Opt. Commun. 188, 47–54 (2001).
11. P. A. Bautista, T. Abe, M. Yamaguchi, and N. Ohyama, "Multispectral image enhancement for H&E stained pathological tissue specimens," Proc. SPIE 6918, 691836 (2008).
12. N. Kosaka, K. Uto, and Y. Kosugi, "ICA-aided mixed-pixel analysis of hyperspectral data in agricultural land," IEEE Trans. Geosci. Remote Sens. 2, 220–224 (2005).
13. D. Scribner, P. Warren, J. Schuler, M. Satyshur, and M. Kruer, "Infrared color vision: an approach to sensor fusion," Opt. Photon. News 9, 27–32 (1998).
14. D. Scribner, P. Warren, and J. Schuler, "Extending color vision methods to bands beyond the visible," Machine Vision Appl. 11, 306–312 (2000).
15. M. Vilaseca, J. Pujol, M. Arjona, and F. M. Martínez-Verdú, "Color visualization system for near-infrared multispectral images," J. Imaging Sci. Technol. 49, 246–255 (2005).
16. N. P. Jacobson and M. R. Gupta, "Design goals and solutions for display of hyperspectral images," IEEE Trans. Geosci. Remote Sens. 43, 2684–2692 (2005).
17. P. A. Bautista, T. Abe, M. Yamaguchi, Y. Yagi, and N. Ohyama, "Digital staining for multispectral images of pathological tissue specimens based on combined classification of spectral transmittance," Comput. Med. Imaging Graph. 29, 649–657 (2005).
18. S. Itano, T. Akiyama, H. Ishida, T. Okubo, and N. Watanabe, "Spectral characteristics of aboveground biomass, plant coverage, and plant height in Italian Ryegrass (Lolium multiflorum L.) meadows," Grassland Sci. 46, 1–9 (2000).

OCIS Codes
(100.2000) Image processing : Digital image processing
(100.2980) Image processing : Image enhancement
(110.4234) Imaging systems : Multispectral and hyperspectral imaging

ToC Category:
Image Processing

History
Original Manuscript: January 11, 2011
Revised Manuscript: April 7, 2011
Manuscript Accepted: April 15, 2011
Published: April 28, 2011

Citation
Noriaki Hashimoto, Yuri Murakami, Pinky A. Bautista, Masahiro Yamaguchi, Takashi Obi, Nagaaki Ohyama, Kuniaki Uto, and Yukio Kosugi, "Multispectral image enhancement for effective visualization," Opt. Express 19, 9315-9329 (2011)
http://www.opticsinfobase.org/oe/abstract.cfm?URI=oe-19-10-9315


