Virtual Journal for Biomedical Optics

  • Editor: Gregory W. Faris
  • Vol. 2, Iss. 1 — Jan. 19, 2007

Three-dimensional identification of biological microorganism using integral imaging

Bahram Javidi, Inkyu Moon, and Seokwon Yeom  »View Author Affiliations


Optics Express, Vol. 14, Issue 25, pp. 12096-12108 (2006)
http://dx.doi.org/10.1364/OE.14.012096


Abstract

In this paper, we address the identification of biological microorganisms using microscopic integral imaging (II). II senses multi-view directional information of 3D objects illuminated by incoherent light. A micro-lenslet array generates a set of elemental images by projecting a 3D scene onto a detector array. In computational reconstruction of II, 3D volumetric scenes are numerically reconstructed by means of a geometrical ray projection method. The identification of the biological samples is performed using the 3D volume of the reconstructed object. In one approach, the multivariate statistical distribution of the reference sample is measured in 3D space and compared with that of an unknown input sample by means of statistical discriminant functions. The multivariate empirical cumulative density function of the 3D volume image is determined for classification. In the other approach, a graph matching technique with Gabor feature extraction is applied to the 3D volumetric images. The reference morphology is identified in unknown input samples using 3D grids. Experimental results are presented for the identification of sphacelaria alga and tribonema aequale alga. We present experimental results for both 3D and 2D imaging. To the best of our knowledge, this is the first report on 3D identification of microorganisms using II.

© 2006 Optical Society of America

1. Introduction

Recently [1–3], holographic techniques [4] have been considered for real-time three-dimensional (3D) sensing and identification of biological microorganisms. A single-exposure on-axis hologram formed on an image sensor is used to record the Fresnel field of the object illuminated by coherent light. The recorded Fresnel field of the biological micro-object is numerically reconstructed, and image recognition techniques [5–8] can be applied. However, digital holography has drawbacks, such as the requirement for a coherent light source, stable recording environments, and the presence of speckle noise.

In this paper, we present experiments to illustrate the identification of biological microorganisms using integral imaging (II). Our objective is to show that II, with appropriate algorithms, can be used to identify biological microorganisms. II [2, 9–18] is a conventional 3D display technique based on the multi-view directional information of a 3D scene. A micro-lenslet array generates a set of 2D elemental images on an imaging sensor. The scenes captured in the elemental images have different perspectives according to the corresponding lenslets. The object in II can be illuminated by ambient or incoherent light. In contrast, holography requires coherent illumination to record the Fresnel field of the scene using interferometry.

In the computational reconstruction of II images [14–17], the volumetric information of 3D objects is numerically reconstructed by means of a ray projection method. Elemental images are projected through a virtual pinhole array at arbitrary longitudinal distances (depths) on a computer. Volumetric scenes can then be reconstructed to reproduce the original 3D information. Various techniques have been researched to improve the resolution of the computational reconstruction, including the moving array lenslet technique (MALT) [16, 17].

There are significant benefits in developing automatic and real-time systems for biological microorganism recognition. Conventional bio-chemical inspection methods are generally labor intensive, time consuming, and require skilled personnel. Hence, there has been interest in the development of automatic and real-time systems for the recognition of microorganisms in various applications, such as monitoring harmful bacteria, biological weapon detection for security and defense, diagnosis of diseases, food safety investigation, and ecological monitoring. It is noted that most image-based efforts in this area use ordinary 2D images [19–21].

In this paper, we present two methods for the identification of biological microorganisms that use the 3D reconstruction volume from II. The first method uses multivariate statistical analysis, and the other is morphology-based recognition using the rigid graph matching (RGM) technique. Frameworks of the proposed approaches are presented in Fig. 1.

Fig. 1. Block diagrams for volumetric 3D recognition of biological microorganisms using (a) the multivariate statistical approach and (b) the morphology-based approach.

The paper is organized as follows. In Section 2, we present a brief review of II recording and reconstruction. The design procedure of 3D recognition using a multivariate statistical method is described in Section 3. In Section 4, the decomposition process using Gabor-based wavelets and the RGM algorithm is described. Experimental results are illustrated in Section 5. Conclusions follow in Section 6.

2. Overview of integral imaging (II) recording and reconstruction

In II, a micro-lenslet array captures light rays emanating from 3D objects as shown in Fig. 2. The light rays that pass through each micro-lenslet are recorded on a 2D imaging sensor, such as a charge-coupled device (CCD) detector. Therefore, each micro-lenslet generates a 2D elemental image containing directional information of the 3D object. The captured elemental images have different perspectives of the 3D object according to the location of the microlenslet.

Fig. 2. A schematic setup of II for microorganism recording.

The reconstruction process is the reverse of the recording process. In the optical display, 3D scenes are reconstructed by the intersection of discrete rays coming from the elemental images. The irradiance of the elemental images is propagated through the micro-lenslet array to form the 3D scene. The 3D display of II provides autostereoscopic 3D images with full and continuous parallax [9–11]. Computational reconstruction of II has been investigated to improve the quality of the optically reconstructed images, which can be degraded by optical display devices [15–17, 27]. Considering that rays from voxels on the 3D object surface contribute to pixels of elemental images during recording, volumetric images can be computationally reconstructed by an inverse mapping based on geometrical ray optics. In computational reconstruction, the elemental images are numerically projected through a virtual pinhole array to reproduce the original 3D information.
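The virtual-pinhole projection described above can be sketched as a shift-and-sum of elemental images. The function below is a minimal illustration under assumptions of ours (a regular lenslet grid, and hypothetical parameter names `pitch_px`, `g`, `z`); it is not the authors' code.

```python
import numpy as np

def reconstruct_plane(elemental, pitch_px, g, z):
    """Shift-and-sum reconstruction of one depth plane from a grid of
    elemental images (hypothetical parameter names).

    elemental : 4D array (rows, cols, h, w) of elemental images
    pitch_px  : lenslet pitch expressed in sensor pixels
    g         : lenslet-to-sensor distance
    z         : reconstruction depth (same units as g)
    """
    rows, cols, h, w = elemental.shape
    # Each elemental image is shifted in proportion to its lenslet index;
    # the per-lenslet shift scales as g/z (geometrical ray projection).
    shift = pitch_px * g / z
    H = h + int(round(shift * (rows - 1)))
    W = w + int(round(shift * (cols - 1)))
    acc = np.zeros((H, W))
    overlap = np.zeros((H, W))
    for r in range(rows):
        for c in range(cols):
            dy, dx = int(round(r * shift)), int(round(c * shift))
            acc[dy:dy + h, dx:dx + w] += elemental[r, c]
            overlap[dy:dy + h, dx:dx + w] += 1.0
    # Averaging over the overlap count focuses points at depth z while
    # blurring points at other depths.
    return acc / np.maximum(overlap, 1.0)
```

Sweeping `z` over a range of depths produces the stack of section images used in the rest of the paper.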

In this paper, II is used to obtain a 3D image of the microorganisms in a single exposure. Computational reconstruction of II generates 3D volumetric data, and the reconstructed images have different focused regions depending on the longitudinal distance along the optical axis [15–17, 27]. By utilizing II sensing and reconstruction, we are able to sense micro-objects of interest located at arbitrary depth levels. The reconstructed volumetric images may present the 3D profile of the objects, where the focused regions are determined by ray optics, reproducing the structure of the microorganisms in 3D space.

3. Microorganism recognition using the multivariate statistical method

In the following, we describe the design procedure for 3D recognition and identification of biological microorganisms using the II technique and the multivariate statistical method. There are advantages to using the II technique for the recognition of biological microorganisms. II can produce a number of depth images and multi-angle views of the scene, providing the many variables we require in order to recognize and identify the microorganisms. These variables carry information about the inter-correlations among them, and we can determine the contribution of each variable in the presence of the others. Hence, the discrimination performance of the system is enhanced by using the important information contained in these variables.

First, we reconstruct the volumetric 3D images of reference biological microorganisms, consisting of p section images, by the geometrical ray projection method. Let the multi-dimensional data set be described by X_p, which is a combination of feature vectors [X_1, …, X_p]. We randomly select n test pixel points from one well-focused section image of the reconstructed volumetric 3D image, choosing enough samples to estimate a reference population distribution [28]. For classification of biological microorganisms based on multi-dimensional data sets, the reference multivariate empirical cumulative distribution function (M-ECDF) F_ref can be obtained by extracting n pixel values from each section image at the same test pixel points. It is also possible to analyze the variation of the data in the longitudinal direction at the fixed test pixel points. Given n ordered data points X_p(1), X_p(2), X_p(3), …, X_p(n), the reference M-ECDF of the section images can be represented as follows:

F_{X_1,\ldots,X_p}(x_1,\ldots,x_p) = P(X_1 \le x_1, \ldots, X_p \le x_p) = \frac{\#\{X_1(n) \le x_1, \ldots, X_p(n) \le x_p\}}{n \times p},
(1)

where X_p is the randomly selected pixel value in the p-th section image and #{A} denotes the number of times event A occurs. In order to obtain the statistical distribution of the criterion discriminant function of the reference data set for a statistical decision rule, we define the following criterion discriminant function for the null hypothesis:

\hat{\Lambda}(x_1,\ldots,x_p) = \frac{F_{\mathrm{ref}}(x_1,\ldots,x_p)}{F_{\mathrm{ref}}(x_1,\ldots,x_p) + F'_{\mathrm{ref}}(x_1,\ldots,x_p)},
(2)

where F′_ref is obtained by generating n′×p random sample data distributed according to the reference M-ECDF F_ref. We can obtain the statistical sampling distribution for the criterion discriminant function by generating multiple F′_ref and substituting the resulting values into Eq. (2). The test statistic Λ̂ enables us to convert a multi-dimensional data set into a one-dimensional distribution and to obtain the statistical sampling distribution for the null hypothesis. We then define the following discriminant function for comparing two multi-dimensional data sets:

\Lambda(x_1,\ldots,x_p) = \frac{F_{\mathrm{ref}}(x_1,\ldots,x_p)}{F_{\mathrm{ref}}(x_1,\ldots,x_p) + F_{\mathrm{input}}(x_1,\ldots,x_p)},
(3)

where F_input is obtained by randomly extracting n′ pixel values from each section image of an unknown input microorganism at the same test pixel points. The values of Λ lie between 0 and 1. We can obtain the statistical distribution of the discriminant function by generating multiple F_input from the unknown input data set and substituting the resulting values into Eq. (3).

If the two multi-dimensional data sets, F_ref and F_input, are similar, the statistical distribution of Λ has a sharp peak close to Λ = 0.5. The statistical distribution of the criterion discriminant function for the null hypothesis, Λ̂, has its highest value at Λ̂ = 0.5. Finally, we calculate the mean-square distance (MSD) between the actual multi-dimensional discriminant function Λ and the criterion discriminant function Λ̂:

\mathrm{MSD} = \hat{d} = E\{[\Lambda(x_1,\ldots,x_p) - \hat{\Lambda}(x_1,\ldots,x_p)]^2\},
(4)

where E{·} is the expectation operator. The statistical decision for identification of biological microorganisms can be achieved with a hypothesis test on the criterion discriminant function. This statistical approach to 3D recognition is suitable for recognizing microorganisms, such as bacteria and biological cells, that do not have well-defined shapes or profiles. Therefore, it allows our 3D recognition system to be robust to variations in the shape of the microorganisms.
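The decision procedure of this section can be sketched numerically. The following is a simplified illustration in which synthetic Gaussian data stand in for the section-image pixel values; the variable names and toy distributions are ours, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(0)

def mecdf(data, queries):
    """Multivariate empirical CDF of `data` (n x p), evaluated at each
    query point (m x p): the fraction of samples dominated componentwise,
    in the spirit of Eq. (1)."""
    return (data[None, :, :] <= queries[:, None, :]).all(-1).mean(1)

def discriminant(f_ref, f_other):
    """Pointwise discriminant of Eqs. (2)-(3): F_ref / (F_ref + F_other);
    it hovers near 0.5 wherever the two distributions agree."""
    return f_ref / np.maximum(f_ref + f_other, 1e-12)

def msd(lam, lam_null):
    """Eq. (4): mean-square distance to the criterion discriminant."""
    return np.mean((lam - lam_null) ** 2)

# Toy stand-ins for the section-image data (p = 3 "sections"):
ref = rng.normal(0.0, 1.0, (5000, 3))    # reference class samples
same = rng.normal(0.0, 1.0, (5000, 3))   # input from the same distribution
diff = rng.normal(1.0, 1.0, (5000, 3))   # input from a shifted distribution
q = rng.normal(0.0, 1.0, (200, 3))       # test pixel points

f_ref = mecdf(ref, q)
resample = ref[rng.integers(0, len(ref), len(ref))]  # plays the role of F'_ref
lam_null = discriminant(f_ref, mecdf(resample, q))   # criterion, ~0.5
msd_true = msd(discriminant(f_ref, mecdf(same, q)), lam_null)
msd_false = msd(discriminant(f_ref, mecdf(diff, q)), lam_null)
# The false class should yield a clearly larger distance than the true class.
```

Repeating the resampling many times, as in Section 5.1, yields the sampling distributions of Λ̂ and Λ on which the hypothesis test is based.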

4. Morphology-based recognition using Gabor-based wavelets and the RGM technique

The elementary function of the Gabor-based wavelets has the form of a Gaussian envelope modulated by a complex sinusoidal function [24, 25]:

g_{uv}(\mathbf{x}) = \frac{\|\mathbf{k}_{uv}\|^2}{\sigma^2} \exp\left(-\frac{\|\mathbf{k}_{uv}\|^2 \|\mathbf{x}\|^2}{2\sigma^2}\right) \left[\exp(j\,\mathbf{k}_{uv}\cdot\mathbf{x}) - \exp\left(-\frac{\sigma^2}{2}\right)\right],
(5)

where x is a 2D discrete position vector, k_uv is a wave-number vector, and σ is proportional to the standard deviation of the Gaussian envelope. k_uv is defined as k_uv = k_0u [cos φ_v, sin φ_v]^t, where k_0u = k_0/δ^(u−1), φ_v = [(v−1)/V]π, u = 1, …, U, and v = 1, …, V. Here, k_0u is the magnitude of the wave-number vector, φ_v is the azimuth angle of the wave-number vector, k_0 is the maximum carrier frequency of the Gabor kernels, δ is the spacing factor in the frequency domain, U and V are the total numbers of decompositions along the radial and tangential directions, respectively, and the superscript t denotes transpose.

The Gabor-based wavelets perform band-pass filtering with spatial and orientation frequency bandwidths depending on the Gaussian envelope. The carrier frequency of the band-pass filter is determined by k_uv. By changing the magnitude and direction of the vector k_uv, we can scale and rotate the Gabor kernel to make self-similar forms, so that each Gabor kernel covers the frequency range selectively. The parameterization of 2D Gabor-based wavelets has been investigated in [24, 25]. It might be desirable to set the parameters of the Gabor kernels for a complete representation of images. In this paper, the parameters are set at σ = π, k_0 = π/8, δ = √2, U = 6, and V = 12.

Let y_uv be the filtered output of the image o after it is convolved with the Gabor kernel g_uv:

y_{uv} = g_{uv} * o,
(6)

where the operator * denotes 2D convolution, and o is the reconstructed image. The output y_uv is referred to as the Gabor coefficient. The rotation-invariant property can be achieved by summing the Gabor coefficients along the tangential direction of the frequency domain. Therefore, we can define a rotation-invariant node vector at x_i, the location of pixel i, as

\mathbf{v}(\mathbf{x}_i) = \left[\sum_{v=1}^{V} y_{1v}(\mathbf{x}_i), \ldots, \sum_{v=1}^{V} y_{Uv}(\mathbf{x}_i)\right]^t.
(7)

It is noted that v(x_i) is a U-dimensional complex vector.
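Equations (5)-(7) can be sketched directly; the kernel support size and the FFT-based convolution below are implementation choices of ours, not specified in the paper, while the default parameters follow the values given above.

```python
import numpy as np

def gabor_kernel(u, v, size=32, sigma=np.pi, k0=np.pi / 8,
                 delta=np.sqrt(2.0), V=12):
    """Gabor kernel g_uv of Eq. (5); the support `size` is our choice."""
    k_mag = k0 / delta ** (u - 1)          # |k_uv| = k_0 / delta^(u-1)
    phi = (v - 1) * np.pi / V              # azimuth angle phi_v
    y, x = np.mgrid[-size // 2:size // 2, -size // 2:size // 2]
    envelope = (k_mag ** 2 / sigma ** 2) * np.exp(
        -k_mag ** 2 * (x ** 2 + y ** 2) / (2.0 * sigma ** 2))
    # Complex carrier with the DC-compensation term exp(-sigma^2/2).
    carrier = (np.exp(1j * k_mag * (x * np.cos(phi) + y * np.sin(phi)))
               - np.exp(-sigma ** 2 / 2.0))
    return envelope * carrier

def conv2_same(img, ker):
    """2D linear convolution cropped to the image size (Eq. (6)), via FFT."""
    H, W = img.shape[0] + ker.shape[0] - 1, img.shape[1] + ker.shape[1] - 1
    full = np.fft.ifft2(np.fft.fft2(img, (H, W)) * np.fft.fft2(ker, (H, W)))
    r0, c0 = ker.shape[0] // 2, ker.shape[1] // 2
    return full[r0:r0 + img.shape[0], c0:c0 + img.shape[1]]

def node_vector(img, xi, U=6, V=12):
    """Eq. (7): sum Gabor coefficients over the V orientations at pixel xi,
    yielding a U-dimensional rotation-invariant complex vector."""
    r, c = xi
    return np.array([sum(conv2_same(img, gabor_kernel(u, v))[r, c]
                         for v in range(1, V + 1)) for u in range(1, U + 1)])
```

In practice the U×V filtered images would be computed once per reconstructed section and the node vectors read off at the grid positions.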

We apply the RGM technique [1–3, 26] with 3D grids to the 3D reconstructed volume images. To use the RGM technique, the Gabor coefficients are defined on a graph, which is a set of nodes associated in a local area. The graphs are superimposed on a reference image and an unknown input image. The RGM technique performs template matching between the two sets of Gabor coefficients, comparing the geometrical shapes of the reference object and the unknown input sample.

The 2D graph matching technique is extended to 3D by applying it to layers of images reconstructed at several depth levels. Two 3D grids are superimposed on the volumetric images of the reference and input microorganisms; the reference morphology is thus defined on a 3D grid over a certain range of longitudinal distances. Morphology similar to the reference is then searched for using the 3D grid on volumetric images of unknown input microorganisms.

Let R and S be two identical and rigid 3D grids placed on the 3D reference image set Ωr and the unknown 3D input image set Ωs, respectively. The reference image set Ωr and the input image set Ωs are composed of images reconstructed at different depths:

\Omega_r = \{o_r(d_j^r);\ j = 1, \ldots, D\} \quad \text{and} \quad \Omega_s = \{o_s(d_j^s);\ j = 1, \ldots, D\},
(8)

where o_r and o_s are the reference and input images, respectively; d_j^r and d_j^s are the reconstruction depths for the reference and input images, respectively; and D is the number of depth levels. The location of the reference graph in each reference image o_r is predetermined by a translation vector p_r and a counter-clockwise rotation angle θ_r. The position vectors of the nodes in each reference image can be computed as

\mathbf{x}_k(\mathbf{p}_r, \theta_r) = A_{\theta_r}(\mathbf{x}_k^o - \mathbf{x}_c^o) + \mathbf{p}_r, \quad k = 1, \ldots, K_{\mathrm{grid}},
(9)

and

A_\theta = \begin{bmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{bmatrix},
(10)

where x_k^o and x_c^o are the position vectors of node k and of the center of the grid without any translation or rotation, respectively; and K_grid is the number of nodes in the grid for one reconstructed image.
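As an illustration of Eqs. (9) and (10), the node positions of a rigid grid can be computed as follows. The default grid shape, spacing, and the example pose follow the values used in the experiments of Section 5.2, while the function name is ours.

```python
import numpy as np

def grid_nodes(p, theta, shape=(5, 5), spacing=10):
    """Eqs. (9)-(10): positions of the nodes of a rigid 2D grid translated
    by `p` and rotated counter-clockwise by `theta` (radians) about its
    centre; defaults match the 5x5 grids with 10-pixel node spacing."""
    A = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])    # Eq. (10)
    ys, xs = np.mgrid[0:shape[0], 0:shape[1]]
    nodes = np.stack([xs.ravel(), ys.ravel()], axis=1) * float(spacing)  # x_k^o
    centre = nodes.mean(axis=0)                                          # x_c^o
    return (nodes - centre) @ A.T + np.asarray(p, dtype=float)           # Eq. (9)

# Reference pose used in the experiments: p_r = [116, 75]^t, theta_r = 30 deg.
ref_nodes = grid_nodes([116.0, 75.0], np.deg2rad(30.0))
```

Because the grid is rigid, the rotation preserves all inter-node distances; only the pose (p, θ) is searched during matching.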

The similarity between the reference graph R and the input graph S is measured by the following function:

\Gamma_{RS}(\mathbf{p}_s, \theta_s) = \frac{1}{D \times K_{\mathrm{grid}}} \sum_{j=1}^{D} \sum_{k=1}^{K_{\mathrm{grid}}} \frac{\langle \mathbf{v}_r[\mathbf{x}_k(\mathbf{p}_r, \theta_r; d_j^r)], \mathbf{v}_s[\mathbf{x}_k(\mathbf{p}_s, \theta_s; d_j^s)] \rangle}{\|\mathbf{v}_r[\mathbf{x}_k(\mathbf{p}_r, \theta_r; d_j^r)]\| \, \|\mathbf{v}_s[\mathbf{x}_k(\mathbf{p}_s, \theta_s; d_j^s)]\|},
(11)

where 〈·,·〉 stands for the inner product; v_r[x_k(p_r, θ_r; d_j^r)] and v_s[x_k(p_s, θ_s; d_j^s)] are the complex rotation-invariant node vectors of graph R in the reference image set and of graph S in the unknown input image set, respectively. The local area covered by graph S is identified with the reference shape if the following condition is satisfied:

\Gamma_{RS}(\mathbf{p}_s, \hat{\theta}_s) > \alpha_\Gamma,
(12)

where α_Γ is a threshold for the similarity function, and θ̂_s is obtained by searching for the best matching angle, which maximizes the similarity function at the position vector p_s:

\hat{\theta}_s = \arg\max_{\theta_s} \Gamma_{RS}(\mathbf{p}_s, \theta_s).
(13)
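Equations (11)-(13) amount to a normalized correlation of node vectors followed by a search over rotation angles. In the sketch below, the magnitude of the complex inner product is taken so the score is real (an assumption on our part), and randomly generated node vectors stand in for the Gabor coefficients.

```python
import numpy as np

def similarity(vr, vs):
    """Eq. (11): mean normalized inner product between corresponding node
    vectors of graphs R and S; vr, vs have shape (D * K_grid, U). The
    magnitude of each complex inner product is used so the score is real."""
    num = np.abs(np.sum(vr.conj() * vs, axis=1))
    den = np.linalg.norm(vr, axis=1) * np.linalg.norm(vs, axis=1)
    return float(np.mean(num / np.maximum(den, 1e-12)))

def best_angle(ref_vecs, vecs_at_angle, angles):
    """Eqs. (12)-(13): search the in-plane rotation maximizing the
    similarity at a fixed position p_s. `vecs_at_angle(theta)` is a
    hypothetical callback returning the input node vectors for that pose."""
    scores = [similarity(ref_vecs, vecs_at_angle(t)) for t in angles]
    i = int(np.argmax(scores))
    return angles[i], scores[i]

# Random node vectors standing in for Gabor coefficients
# (D * K_grid = 3 * 25 = 75 nodes, U = 6 scales):
rng = np.random.default_rng(1)
vr = rng.normal(size=(75, 6)) + 1j * rng.normal(size=(75, 6))
```

A graph matched against itself scores 1, while unrelated node vectors score well below it, which is what the threshold α_Γ of Eq. (12) exploits.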

5. Experimental results

Two filamentous microorganisms, sphacelaria alga and tribonema aequale alga, are captured by the microscopic II system. A schematic diagram of the experimental setup for II recording is shown in Fig. 2. The approximate sizes of sphacelaria alga and tribonema aequale alga are 50–100 µm and 10–20 µm, respectively. The size of one pixel in the CCD array is 7.4 µm × 7.4 µm. The distance between the lens array and the reconstructed plane of the object is approximately 300 µm, including the cover-glass thickness. The magnification of the objective lens is approximately 20. The size of each elemental image is 180×180 pixels. Because objects are reconstructed from these elemental images, the resolution of the reconstructed object may not be sufficient to see a clear reconstructed image. However, the resolution of the reconstructed image can be improved using the MALT.

Fig. 3. Sections of elemental images for (a) sphacelaria alga and (b) tribonema aequale alga.
Fig. 4. Reconstructed images at depth d=330 µm for (a) sphacelaria alga and (b) tribonema aequale alga.

Figure 3 shows sections of the elemental images of sphacelaria alga and tribonema aequale alga, respectively. Multiple layers of microorganism images are reconstructed at different depth levels by the ray projection method. A total of 12 images are reconstructed at depths d = 300, 306, …, 366 µm for each microorganism, where d is the distance between the lens array and the reconstructed plane of the object. In II, a micro-lenslet array captures light rays as shown in Fig. 2. Light diffused from the 3D objects passes through each micro-lenslet and is recorded on a 2D imaging sensor, such as a CCD detector. Each micro-lenslet generates a 2D elemental image containing directional information of the 3D object. In the microscope system, an objective lens is used between the lens array and the CCD to obtain magnified elemental images of the object. The minimum distance of the reconstructed object is the same as the thickness of the cover glass of the slide.

5.1 3D II recognition using the multivariate statistical method

To evaluate the proposed 3D recognition system, we reconstruct 15 section images of sphacelaria alga and tribonema aequale alga, corresponding to the reference data sets and the input data set, by the II technique. The volumetric 3D image is reconstructed by the geometrical ray projection method [15]. We automatically remove the background of the reconstructed image by inspecting the histogram of the image and applying the Canny edge detection algorithm to the image data [29].

Fig. 5. The statistical distribution of the criterion discriminant function [see Eq. (2)] generated from the multi-dimensional data sets.

We randomly select test pixel points from one well-focused section image corresponding to the reference data to form the reference M-ECDF, where 5000 pixel points were chosen. Then, we generate a multi-dimensional reference data set by selecting the pixel value of each section image at the same test pixel points. The statistical distribution of the criterion discriminant function for the multi-dimensional reference data set is shown in Fig. 5, where we obtain F′_ref by selecting 500×15 random data samples distributed according to the reference M-ECDF and generate F′_ref 500 times to construct the statistical sampling distribution of the criterion discriminant function Λ̂ for the null hypothesis (F_ref = F_input). As expected, the criterion discriminant function has a peak value at 0.5.

Fig. 6. The mean-square distance (MSD) between the criterion discriminant function and the actual discriminant function [see Eq. (4)] generated from the multiple section images. The null hypothesis is the training true class. The data are obtained from 3D II volume images. (a) Null hypothesis and (b) non-training true class and false class.
Fig. 7. Results for one 2D image. The MSD between the criterion discriminant function and the actual discriminant function [see Eq. (4)] generated from one section image. The null hypothesis is the training true class. The data are obtained from a 2D reconstructed image. (a) Null hypothesis, (b) non-training true class, and (c) false class.

To test the performance of the proposed 3D recognition system, we obtain the statistical distribution of the actual discriminant function Λ for the multi-dimensional data sets of the true and false classes, respectively, following similar procedures. The statistical distribution of Λ is formed by generating F_input 200 times and substituting the resulting values into Eq. (3); this procedure can be repeated more often to obtain a more precise statistical sampling distribution for the test statistic Λ.

Figure 6 shows the statistical distribution of the MSD between the averaged criterion discriminant function 〈Λ̂(x_1, …, x_p)〉 and the actual discriminant function Λ(x_1, …, x_p) for the true and false classes, respectively. We obtain the statistical distribution of the MSD for the null hypothesis by calculating the MSD between 〈Λ̂(x_1, …, x_p)〉 and Λ̂(x_1, …, x_p) to make a statistical decision rule. As shown in Fig. 6, the maximum value of the MSD for the null hypothesis is 0.0003, and the mean values of the MSD for the true and false classes are 0.00026 and 0.0016, respectively. It is noted that all of the false-class MSD values over 200 trial data sets are above the maximum value of the MSD for the null hypothesis. To compare with the MSD calculated using only one section image (a well-focused image), we also calculate the MSD between the averaged criterion discriminant function 〈Λ̂(x)〉 and the actual discriminant function Λ(x) using a data set consisting of only one 2D section image.

Figure 7 shows the statistical distribution of the calculated MSD between 〈Λ̂(x)〉 and Λ(x) for the true and false classes, respectively. As shown in Fig. 7, the maximum value of the MSD for the null hypothesis is 0.0033, and the mean values of the MSD for the true and false classes are 0.0005 and 0.0006, respectively. It is noted that all of the 200 trial data sets for the true and false classes are below the maximum value of the MSD for the null hypothesis. In this case, it is difficult to measure the similarity or dissimilarity between two data sets using only one section image. Thus, preliminary experimental results indicate that it may be possible to classify microorganisms using volumetric 3D images obtained by II and the multivariate statistical method.

5.2 3D II recognition using the morphology-based method

In this subsection, the RGM technique is applied to volumetric scenes using 3D grids. The 3D data are reconstructed using the II sensing and reconstruction method [15]. The parameters used for the Gabor-based wavelets were presented in Section 4. The Gabor-based wavelets are applied to the inversely normalized reconstructed images as discussed previously. The rectangular grids R and S are composed of 5×5×3 nodes, and the distance between nodes is 10 pixels in both the x and y directions. In the z direction, the reference grid is superimposed on a reference image set composed of 3 reconstructed images at depths of 324, 330, and 336 µm. Therefore, the number of depths (D) is 3, and the number of nodes in one image (K_grid) is 25. The reference grid is placed with p_r = [116, 75]^t and θ_r = 30° in each reconstructed image. The location and orientation of the reference grid are manually decided in the experiments.

Fig. 8. An example of RGM results. (a) A reference graph in the center image of the reference set and (b) the input graphs detected in the center image of the fifth input set.

The tests are performed on 10 sets of reconstructed images for sphacelaria alga and tribonema aequale alga. The first set is composed of three images reconstructed at depths of 300, 306, and 312 µm; the second set is composed of images reconstructed at depths of 306, 312, and 318 µm; and so on. Thus, the fifth set is identical to the reference image set for sphacelaria alga. Figures 9 and 10 show the maximum similarity function of Eq. (11) and the number of detections satisfying Eq. (12) for the sphacelaria input image sets, respectively. As the reconstruction depths of an image set approach those of the reference images, more detections are found, showing strong similarity between the two image sets. No detection is found for any of the tribonema aequale alga input image sets.

Fig. 9. Maximum similarity function for 10 input image sets of sphacelaria alga. Image set #5 is the reference true class; the others (1–4 and 6–10) are non-training true-class image sets.
Fig. 10. Number of detections and the maximum similarity function for 10 input image sets of sphacelaria alga. Image set #5 is the reference true class; the others (1–4 and 6–10) are non-training true-class image sets.

6. Conclusions

In this paper, we have presented 3D identification of biological microorganisms by means of II. II records multiple views of a 3D scene with different perspectives in a single exposure. The volumetric information of the biological microorganism is reconstructed numerically by the ray projection method.

We have applied two identification methods to the volumetric 3D reconstruction. In one approach, we estimated the statistical distributions of the biological microorganisms by the multivariate statistical method. The M-ECDF is computed using non-parametric methods suitable for non-Gaussian-distributed data. We then calculated the MSD between the two discriminant functions to distinguish between the two data sets. We have presented experiments measuring the multivariate statistics obtained from II 3D data, and the results were compared with those obtained from 2D images.

In the morphology-based approach, the RGM technique is applied to multiple layers of reconstructed images using 3D grids. RGM is useful for identifying the reference shape in general cases where different species of microorganisms exist in the same sample, regardless of the curvature or shape of the microorganisms. We have shown that the extended RGM technique using 3D grids is suitable for investigating volumetric scenes reconstructed by the II system.

Acknowledgments

The authors wish to thank Dr. Yong-Seok Hwang and Dr. Seung-hyun Hong for their assistance with experiments.


OCIS Codes
(080.0080) Geometric optics : Geometric optics
(110.6880) Imaging systems : Three-dimensional image acquisition

ToC Category:
Imaging Systems

History
Original Manuscript: June 22, 2006
Revised Manuscript: October 2, 2006
Manuscript Accepted: October 12, 2006
Published: December 11, 2006

Virtual Issues
Vol. 2, Iss. 1 Virtual Journal for Biomedical Optics

Citation
Bahram Javidi, Inkyu Moon, and Seokwon Yeom, "Three-dimensional identification of biological microorganism using integral imaging," Opt. Express 14, 12096-12108 (2006)
http://www.opticsinfobase.org/vjbo/abstract.cfm?URI=oe-14-25-12096

