Virtual Journal for Biomedical Optics

  • Editor: Gregory W. Faris
  • Vol. 1, Iss. 6 — Jun. 13, 2006

Real-time automated 3D sensing, detection, and recognition of dynamic biological micro-organic events

Bahram Javidi, Seokwon Yeom, Inkyu Moon, and Mehdi Daneshpanah


Optics Express, Vol. 14, Issue 9, pp. 3806-3829 (2006)
http://dx.doi.org/10.1364/OE.14.003806




Abstract

In this paper, we present an overview of three-dimensional (3D) optical imaging techniques for real-time automated sensing, visualization, and recognition of dynamic biological microorganisms. Real-time sensing and 3D reconstruction of dynamic biological microscopic objects can be performed by single-exposure on-line (SEOL) digital holographic microscopy. A coherent 3D microscope-based interferometer is constructed to record digital holograms of dynamic micro-biological events. Complex amplitude 3D images of the biological microorganisms are computationally reconstructed at different depths by digital signal processing. Bayesian segmentation algorithms are applied to identify regions of interest for further processing. A number of pattern recognition approaches are addressed to identify and recognize the microorganisms. One approach uses the 3D morphology of the microorganisms by analyzing 3D geometrical shapes, which are composed of magnitude and phase. Segmentation, feature extraction, graph matching, feature selection, and training and decision rules are used to recognize the biological microorganisms. A different approach uses a 3D technique that is tolerant to the varying shapes of non-rigid biological microorganisms. After segmentation, a number of sampling patches are arbitrarily extracted from the complex amplitudes of the reconstructed 3D biological microorganism. These patches are processed using a number of cost functions and statistical inference theory to test the equality of means and equality of variances between the sampling segments. Also, we discuss the possibility of employing computational integral imaging for 3D sensing, visualization, and recognition of biological microorganisms illuminated under incoherent light. Experimental results with several biological microorganisms are presented to illustrate detection, segmentation, and identification of micro-biological events.

© 2006 Optical Society of America

1. Introduction

The development of reliable, automated, and low-cost methods for real-time detection and identification of harmful bacteria and viruses would be of significant benefit and is essential in combating catastrophic diseases. Such pandemics could create global disasters, with death tolls in the millions [1–2].

1. The largely forgotten influenza of 1918, a.k.a. “Spanish Flu” or “La Grippe,” killed an estimated 40 million people worldwide, and an estimated 600,000 in the USA. It infected an estimated 20% of the world population. See Alfred Crosby, America’s Forgotten Pandemic: The Influenza of 1918, (Cambridge University Press, Cambridge, 1989).

Conventional methods in practice for inspecting most bacteria or viruses involve bio-chemical processing. In general, these techniques are labor intensive, require special skills, and are not real-time. Clearly, there could be vast applications for real-time automated recognition of microorganisms in a multitude of areas, including combating biological terrorism, security and defense, diagnosis of diseases, health care, and food safety investigation.

Real-time automatic recognition of living organisms is a very difficult task for a number of reasons. Biological microorganisms are dynamic events rather than rigid objects. They can move, grow, and reproduce, and they vary in size and shape even among the same species [3].

3. J. W. Lengeler, G. Drews, and H. G. Schlegel, Biology of the Prokaryotes, (Blackwell Science, New York, 1999).

In particular, bacteria and viruses are very small and have simple morphological traits. They may occur as single cells or form associations of various complexities according to environmental conditions. Conventional methods in this field have aimed to recognize cells through bio-chemical analyses. Most image-based recognition efforts for specific microorganisms have been based on two-dimensional (2D) intensity images [4–8], which may not be effective.

4. M. G. Forero, F. Sroubek, and G. Cristobal, “Identification of tuberculosis bacteria based on shape and color,” Real-Time Imaging 10, 251–262 (2004). [CrossRef]

2D image processing and pattern recognition techniques have been extensively applied to identify objects in unknown scenes [9–18].

9. A. Mahalanobis, R. R. Muise, S. R. Stanfill, and A. V. Nevel, “Design and application of quadratic correlation filters for target detection,” IEEE Trans. Aerosp. Electron. Syst. 40, 837–850 (2004). [CrossRef]

Recently, there has been increased interest in three-dimensional (3D) optical imaging and automatic target recognition (ATR) [19–35].

19. B. Javidi and F. Okano, eds., Three-Dimensional Television, Video, and Display Technologies, (Springer, New York, 2002).

Digital holography techniques [36–41] can be used for 3D image sensing [21–27].

36. T. Kreis, ed., Handbook of Holographic Interferometry, (Wiley-VCH, 2005).

21. B. Javidi and E. Tajahuerce, “Three dimensional object recognition using digital holography,” Opt. Lett. 25, 610–612 (2000). [CrossRef]

Previously, computer-synthesized holograms were used for complex spatial filtering [42].

42. B. R. Brown and A. W. Lohmann, “Complex spatial filtering with binary masks,” Appl. Opt. 5, 967–969 (1966). [CrossRef] [PubMed]

Holographic microscopy [40–41] is an attractive 3D imaging technique for acquisition and visualization of the 3D information of micro-biological objects.

40. P. Ferraro, S. Grilli, D. Alfieri, S. D. Nicola, A. Finizio, G. Pierattini, B. Javidi, G. Coppola, and V. Striano, “Extended focused image in microscopy by digital holography,” Opt. Express 13, 6738–6749 (2005). [CrossRef] [PubMed]

By means of digital holographic microscopy, one can obtain both the magnitude and phase content of a microorganism. Single-exposure on-line (SEOL) digital holography [25–26] for 3D image recognition has benefits compared with off-axis and/or phase-shifting on-axis digital holography.

25. B. Javidi and D. Kim, “Three-dimensional-object recognition by use of single-exposure on-axis digital holography,” Opt. Lett. 30, 236–238 (2005). [CrossRef] [PubMed]

In particular, the SEOL holographic setup is simpler than its off-axis counterpart and is more robust to input object size and scale variations. Since recording a hologram in the SEOL setup requires only a single exposure, it is robust to sensor noise and environmental variation; thus it can be used for monitoring and studying dynamic events of microorganisms.

In this paper, we present an overview of several techniques for real-time automated 3D sensing, detection, visualization, segmentation, and recognition of microorganisms [28–33, 43].

28. B. Javidi, I. Moon, S. Yeom, and E. Carapezza, “Three-dimensional imaging and recognition of microorganism using single-exposure on-line (SEOL) digital holography,” Opt. Express 13, 4492–4506 (2005). [CrossRef] [PubMed]

43. M. DaneshPanah and B. Javidi, “Segmentation of 3D holographic images using bivariate jointly distributed region snake,” Opt. Express (submitted).

In particular, SEOL digital holography is employed for sensing and visualization of micro-biological objects. The optical setup of SEOL digital holography is based on a Mach-Zehnder interferometer that records the Fresnel diffraction field of microorganisms. The 3D complex amplitude of the microorganisms is computationally reconstructed at arbitrary depths along the optical axis without mechanical scanning.

Segmentation of microscopic objects can be accomplished using a number of approaches [43–46]. One technique is the bivariate jointly distributed region snake method for segmentation of complex amplitude biological microorganism images [43]. Living organisms are non-rigid objects that vary in shape and size. Moreover, they often do not exhibit clear edges in computationally reconstructed SEOL holographic images. Thus, conventional segmentation techniques based on the edge map may fail to segment these images appropriately. We present a statistical framework based on the joint probability distribution of the magnitude and phase information of SEOL holographic microscopy images and maximum likelihood estimation of the parameters of the joint probability density function. An optimization criterion is computed by maximizing the likelihood function of the target support hypothesis [47–49].

47. N. Mukhopadhyay, Probability and Statistical Inference, (Marcel Dekker, New York, 2000).

The performance of the proposed method for the segmentation of reconstructed SEOL holographic microorganism images is presented along with experimental results.

In one 3D recognition approach [see Fig. 1(a)], after the segmentation of the microorganisms, recognition can be performed by analyzing the 3D complex morphology of the computationally reconstructed holographic images. Gabor-based wavelets [50–52] extract features of the microorganisms by decomposing the reconstructed images in the spatial frequency domain.

50. J. G. Daugman, “Uncertainty relation for resolution in space, spatial frequency, and orientation optimized by two-dimensional visual cortical filters,” J. Opt. Soc. Am. A 2, 1160–1169 (1985). [CrossRef]

A feature matching technique follows, which measures the similarity of 3D morphologies between a reference microorganism and unknown biological samples. Graph matching with Gabor-based wavelets has been used as a robust template matching technique that is tolerant to shift, rotation, and distortion [53–56].

53. M. Lades, J. C. Vorbruggen, J. Buhmann, J. Lange, C. v.d. Malsburg, R. P. Wurtz, and W. Konen, “Distortion invariant object recognition in the dynamic link architecture,” IEEE Trans. Comput. 42, 300–311 (1993). [CrossRef]

We may utilize the graph matching technique with Gabor features for automatic selection of the feature vectors to be used in the training and testing stages. In this case, trained features of the specific microorganisms are stored in a database [29, 30].

29. S. Yeom, I. Moon, and B. Javidi, “Real-time 3D sensing, visualization and recognition of dynamic biological micro-organisms,” Proceedings of IEEE 94, 550–566 (2006). [CrossRef]

30. S. Yeom and B. Javidi, “Three-dimensional recognition of microorganisms,” J. Biomed. Opt. 11, 024017 (2006). [CrossRef]

As we discussed, automatic recognition of microorganisms is a difficult task because of their dynamic nature (moving, growing, and varying in size and shape). Therefore, an alternative recognition approach is developed that utilizes statistical inference theory for a shape-tolerant 3D recognition system as shown in Fig. 1(b). A number of sampling segments are randomly extracted from the reconstructed 3D image of the microorganisms. By selecting arbitrary sampling segments and testing them through statistical inference, we can develop a recognition system that is independent of the shape of the microorganisms. These sampling segments are processed using various cost functions, including the mean-squared distance (MSD), the mean-absolute distance (MAD), and statistical inference using sampling theory [47]. The equality of means and the equality of variances between the sampling segments of a reference microorganism and unknown input biological samples are tested for recognition. Student’s t distribution and Fisher’s F distribution are used to analyze the difference of means and the ratio of variances, respectively, of the reconstructed microorganism images [47, 57].

57. G. W. Snedecor and W. G. Cochran, Statistical Methods, (Iowa State University Press, 1989).

After calculating the statistical parameters of the microorganisms, the data can be processed by training rules and then stored in the database.
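As an illustration, the two hypothesis tests above can be sketched in a few lines of Python. This is a minimal sketch on synthetic 1D magnitude samples, not the authors' implementation; the data values and the pooled-variance form of the t statistic are illustrative assumptions.

```python
import math
import statistics

def two_sample_t(a, b):
    """Pooled two-sample t statistic for equality of means (Student's t)."""
    na, nb = len(a), len(b)
    ma, mb = statistics.fmean(a), statistics.fmean(b)
    va, vb = statistics.variance(a), statistics.variance(b)  # sample variances
    sp2 = ((na - 1) * va + (nb - 1) * vb) / (na + nb - 2)    # pooled variance
    return (ma - mb) / math.sqrt(sp2 * (1.0 / na + 1.0 / nb))

def variance_ratio(a, b):
    """F statistic for equality of variances (Fisher's F)."""
    return statistics.variance(a) / statistics.variance(b)

# Illustrative magnitude samples from a "reference" and an "unknown" segment.
ref = [1.0, 1.2, 0.9, 1.1, 1.0, 0.8]
unk = [1.05, 0.95, 1.1, 1.0, 0.9, 1.0]

t_stat = two_sample_t(ref, unk)    # compared against a Student's t critical value
f_stat = variance_ratio(ref, unk)  # compared against a Fisher's F critical value
```

In a full system, each statistic would be compared against the critical value of its distribution (t with na+nb−2 degrees of freedom, F with na−1 and nb−1) to accept or reject the equality hypothesis.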

As we will show in the experiments, spatially shift-invariant recognition of biological microorganisms can be obtained through the reconstructed volumetric image of an unknown input biological scene.

Fig. 1. Diagram of the approach for 3D sensing, visualization and recognition of microbiological objects using SEOL holographic microscopy, (a) 3D morphology-based recognition, (b) shape-tolerant 3D recognition.

In addition, 3D sensing, imaging, and recognition of biological microorganisms may be achieved by means of computational integral imaging (II). An II sensing system can operate with incoherent light, generating multi-view perspectives of a 3D scene by using a micro-lens array [19, 58–70].

58. M. G. Lippmann, “Epreuves reversibles donnant la sensation du relief,” J. Phys. 7, 821–825 (1908).

The volumetric information of the biological microorganism is reconstructed numerically by a ray projection method.

The research described in this paper has a number of benefits: 1) the biological microorganisms are analyzed in 3D coordinates and complex amplitude topology; 2) the single-exposure on-line holographic sensor allows optimization of the space-bandwidth product of detection as well as robustness to environmental variations during the sensing process; 3) multiple exposures are not required; thus, dynamic biological events can be detected in real-time; 4) a statistical segmentation technique based on complex amplitude reconstructed holographic images is developed; 5) a graph matching technique with Gabor features measures the similarity of 3D morphologies between a reference and unknown input microorganisms; and 6) shape-tolerant 3D microorganism recognition yields promising recognition performance independent of the geometrical shape of the microorganisms.

In Section 2, we present a brief overview of SEOL digital holography and its advantages for sensing micro-organic biological events. The segmentation of the complex-valued biological microorganism images using the regional segmentation method is presented in Section 3. Microorganism recognition using 3D complex morphology of the reconstructed images is presented in Section 4. Shape-tolerant recognition technique using statistical inference is presented in Section 5. Spatially shift-invariant recognition of microorganisms is discussed in Section 6. In Section 7, experimental results are demonstrated. The possibility of computational integral imaging for 3D sensing, visualization, and recognition of biological microorganisms is discussed in Section 8. Summary and conclusions follow in Section 9.

2. Overview of SEOL holographic microscopy

The block diagram for real-time automated 3D sensing, detection, and recognition of dynamic biological micro-organic events is shown in Fig. 1. The first stage is SEOL holographic sensing and 3D reconstruction. The interference intensity pattern of a microorganism in the Fresnel diffraction field is recorded by a charge-coupled device (CCD) array as shown in Fig. 2. A beam splitter divides the laser beam into object and reference waves. The object beam illuminates the specimen, which is magnified by the microscope objective. The SEOL digital hologram of a microorganism is generated by the reference wave and the diffracted wave-fronts of the specimen. Our system requires only a single exposure; therefore, SEOL digital holography is suitable for recognizing a moving 3D object and is tolerant to external noise factors. The complex field distribution of a microorganism at the hologram plane can be represented as follows:

O_H(x,y)=\int_{d_0-\delta/2}^{d_0+\delta/2}\frac{\exp[j2\pi z/\lambda]}{j\lambda z}\exp\left[\frac{j\pi}{\lambda z}(x^2+y^2)\right]\times\left\{\iint O(\varepsilon,\eta;z)\exp\left[\frac{j\pi}{\lambda z}(\varepsilon^2+\eta^2)\right]\exp\left[-\frac{j2\pi}{\lambda z}(x\varepsilon+y\eta)\right]d\varepsilon\,d\eta\right\}dz,

(1)

where d_0 is the distance between the center of the microorganism and the hologram plane; δ is the microorganism’s depth along the z-axis; and O(ε,η;z) is the field distribution of the microorganism at the object plane. The SEOL digital hologram of a microorganism at the hologram plane can be expressed as follows:

H(x,y)=\left|O_H(x,y)+R(x,y)\right|^2-\left|O_H(x,y)\right|^2-\left|R(x,y)\right|^2,

(2)

where the reference beam’s intensity |R|^2 is measured only once in the experiment, and the object beam’s intensity |O_H|^2 can be approximated by means of a local averaging technique [28–33].

Fig. 2. Experimental setup for recording a SEOL digital hologram of 3D microorganisms; Ar: argon laser; BS1, BS2: beam splitters; M1, M2: mirrors; MO: microscope objective; CCD: charge-coupled device array.

The reconstruction of the original microorganism is performed digitally on a computer. The field distribution of the microorganism from SEOL digital hologram can be numerically reconstructed by the inverse Fresnel transformation:

O(\xi,\eta;\tau=d_0)=\mathrm{IFrT}\{H(x,y)\}\big|_{\tau=d_0}=A_o(x,y)\exp[j\Phi_o(x,y)],

(3)

where IFrT{·} denotes the inverse Fresnel transformation. The reconstructed image from the SEOL digital hologram inevitably contains a conjugate image. This undesired component degrades the quality of the reconstructed 3D image, but the intrinsically defocused conjugate image also contains information about the 3D microorganism. As an additional merit, SEOL digital holography allows us to capture a dynamic time-varying scene, which is digitally reconstructed on the computer for monitoring and recognizing moving and growing microorganisms.
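As a rough numerical sketch of this reconstruction step, a single-FFT discrete Fresnel transform and its inverse can be written as below. This is an illustrative simplification, not the authors' code: the constant prefactor exp[j2πz/λ]/(jλz) of Eq. (1) is dropped, and the grid size, pixel pitch, wavelength, and depth are assumed values.

```python
import numpy as np

def quadratic_chirp(n, dx, wavelength, z):
    """Quadratic phase factor exp[j*pi/(lambda*z)*(x^2 + y^2)] on an n x n grid."""
    c = (np.arange(n) - n / 2.0) * dx
    X, Y = np.meshgrid(c, c)
    return np.exp(1j * np.pi / (wavelength * z) * (X**2 + Y**2))

def fresnel_forward(field, chirp):
    """Single-FFT Fresnel diffraction (constant prefactor omitted)."""
    return np.fft.fft2(field * chirp)

def fresnel_inverse(hologram_field, chirp):
    """Numerical inverse of fresnel_forward (the IFrT step of Eq. (3))."""
    return np.fft.ifft2(hologram_field) * np.conj(chirp)

# Assumed parameters: 128x128 grid, 10 um pixels, 514.5 nm argon line, 5 cm depth.
n, dx, wl, z = 128, 10e-6, 514.5e-9, 0.05
chirp = quadratic_chirp(n, dx, wl, z)

# A Gaussian "object" field, propagated to the hologram plane and back.
c = (np.arange(n) - n / 2.0) * dx
X, Y = np.meshgrid(c, c)
obj = np.exp(-(X**2 + Y**2) / (2 * (50e-6) ** 2)).astype(complex)
recon = fresnel_inverse(fresnel_forward(obj, chirp), chirp)
```

Because the inverse exactly undoes the forward FFT and chirp multiplication, `recon` matches `obj` up to floating-point error; with a recorded hologram H(x,y), reconstruction at a chosen depth amounts to applying `fresnel_inverse` with the chirp for that depth.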

3. Microorganism segmentation using bivariate region snakes

A critical step for microorganism identification is the segmentation of the reconstructed images, which facilitates proper detection and recognition. In this section, we address the segmentation of SEOL holographic images of microorganisms using bivariate jointly distributed region snakes [43], which are based on statistically independent region snakes [44, 45].

44. O. Germain and P. Refregier, “Optimal snake-based segmentation of a random luminance target on a spatially disjoint background,” Opt. Lett. 21 (1996). [CrossRef] [PubMed]

45. C. Chesnaud, V. Page, and P. Refregier, “Improvement in robustness of the statistically independent region snake-based segmentation method of target-shape tracking,” Opt. Lett. 23, 488–490 (1998). [CrossRef]

This technique is built on a statistical framework capable of handling images with complex-valued pixels and the joint probability distribution of the magnitude and phase information of the scene. Within this framework, the optimization criterion is computed by maximizing the likelihood function of the target support hypothesis H_w, while no knowledge of the statistical properties of the target or background is assumed a priori. Instead, a maximum likelihood estimator estimates the necessary statistical parameters. Moreover, the target and background pixels are each assumed to follow an independent bivariate Gaussian distribution over their magnitude and phase contents.

This method uses the concept of snake active contours [43–46] to separate the target from the background scene by a target support hypothesis. A snake is essentially a closed contour that can be approximated by a multi-node polygon, which evolves during the segmentation process to minimize a certain criterion known as the snake energy [46].

46. M. Kass, A. Witkin, and D. Terzopoulus, “Snakes: Active contour models,” Int. J. Comput. Vision 1, 321–331 (1987). [CrossRef]

This contour divides the image into inner and outer regions, denoted by Ωt (target) and Ωb (background), respectively. A stochastic algorithm is utilized to carry out the optimization and guide the deformations of the snake, eventually forcing the snake contour to converge to the original microorganism boundary [43–45].

There are several advantages to using the bivariate jointly distributed region snake algorithm [43–45]. The bivariate joint distribution of magnitude and phase information provides a more accurate image model for the reconstructed images of SEOL digital holography, since it captures the correlation between each pixel’s magnitude and phase content. That is in contrast with independent distribution analysis, which treats the magnitude and phase information as independent random variables and consequently ignores the correlation between these two correlated random variables. In addition, in the region snake regime, the evolution of the snake contour does not depend on local pixels near the contour edge, as in classic snake active contours [46]; rather, the evolution process is based on the statistical distribution of the complex amplitude inside and outside the snake contour. The latter fact facilitates segmentation of objects even when they are out of focus or have jagged boundaries.

3.1 Methodology

Computational reconstruction of the SEOL hologram obtained from the interference pattern formed on the CCD involves the inverse Fresnel transform. As a result, the reconstructed holographic images have complex-valued pixels; thus each pixel s_i = α_i exp(jφ_i) is a complex number with magnitude α_i and phase φ_i. The target and background pixels are assumed to follow two independent bivariate normal distributions. Each distribution has a probability density function consisting of two dependent normal random variables, α and φ, for the magnitude and phase, respectively. The original bivariate normal probability density function is not directly separable. However, by conditioning one of the variables (α) on the second variable (φ), one can obtain the separated form of the bivariate normal probability density function as follows [47]:

f_u(\alpha_i,\varphi_i)=\frac{1}{\sigma_{\varphi u}}\,\Phi\!\left(\frac{\varphi_i-\mu_{\varphi u}}{\sigma_{\varphi u}}\right)\times\frac{1}{\sigma_{\alpha|\varphi u}}\,\Phi\!\left(\frac{\alpha_i-\mu_{\alpha|\varphi u}}{\sigma_{\alpha|\varphi u}}\right),

(4)

where Φ(x) = (2π)^{-1/2} exp(-x^2/2) denotes the standard normal density. The subscript u ∈ {t, b} discriminates between the target and background, respectively. Also, let the parameter vector Θ_u = {μ_{αu}, μ_{φu}, σ_{αu}, σ_{φu}, ρ_u} contain the distribution parameters of either the target or the background. Since the separation of the two random variables in Eq. (4) is made possible by conditioning α on φ, the corresponding conditional mean and variance are used for α as follows [47]:

\mu_{\alpha|\varphi u}=\mu_{\alpha u}+\frac{\rho_u\,\sigma_{\alpha u}\,(\varphi-\mu_{\varphi u})}{\sigma_{\varphi u}},\qquad \sigma_{\alpha|\varphi u}^2=\sigma_{\alpha u}^2\,(1-\rho_u^2).

(5)

Let w = {w_i | i ∈ [1, N]} be a binary window model that determines the support of the target, such that w_i = 1 for target pixels and w_i = 0 elsewhere, where N is the total number of image pixels. The image can then be represented as the sum of disjoint target complex pixels (a) inside the binary window w and background complex pixels (b) outside the window [48, 49].

48. B. Javidi and J. Wang, “Limitations of the classic definition of the signal-to-noise ratio in matched filter based optical pattern recognition,” Appl. Opt. 31, 6826–6829 (1992). [CrossRef] [PubMed]

49. B. Javidi and J. Wang, “Optimum distortion invariant filters for detecting a noisy distorted target in background noise,” J. Opt. Soc. Am. A 12, 2604–2614 (1995). [CrossRef]

Thus, we adopt the one-dimensional representation of the image as s_i = a_i w_i + b_i (1 - w_i).

With these notations, the problem of segmentation reduces to finding an optimal choice for w that maximizes the hypothesis probability P[H_w|s] (i.e., the most likely window w of the target), where H_w represents the hypothesis that w is the target support. Using Bayes’ rule and considering equally likely hypotheses, maximizing the a posteriori hypothesis probability is equivalent to maximizing the conditional probability, which is expressed as the likelihood function for H_w as follows:

P(s\,|\,H_w,\Theta)=\prod_{i=1}^{N}f_t(\alpha_i,\varphi_i)^{\,w_i}\times\prod_{i=1}^{N}f_b(\alpha_i,\varphi_i)^{\,(1-w_i)},

(6)

where the vector Θ = {Θ_t, Θ_b} contains all the parameters needed to characterize the bivariate normal distributions of the target and background pixels. Since no prior knowledge of the target and background is assumed, these parameters must be estimated. Thus, the maximum likelihood estimator is utilized as follows:

\hat{\mu}_{\alpha u}=\frac{1}{N_u(w)}\sum_{i\in\Omega_u}\alpha_i,\qquad \hat{\mu}_{\varphi u}=\frac{1}{N_u(w)}\sum_{i\in\Omega_u}\varphi_i,

\hat{\sigma}_{\alpha u}=\left\{\frac{1}{N_u(w)}\sum_{i\in\Omega_u}(\alpha_i-\mu_{\alpha u})^2\right\}^{1/2},\qquad \hat{\sigma}_{\varphi u}=\left\{\frac{1}{N_u(w)}\sum_{i\in\Omega_u}(\varphi_i-\mu_{\varphi u})^2\right\}^{1/2},

\hat{\rho}_u=\frac{1}{N_u(w)\,\sigma_{\alpha u}\sigma_{\varphi u}}\sum_{i\in\Omega_u}(\alpha_i-\mu_{\alpha u})(\varphi_i-\mu_{\varphi u}),

(7)

where N_u(w) denotes the number of pixels in the target or background window according to the subscript u. By substituting the bivariate joint probability density function of Eq. (4) into Eq. (6) and using Eqs. (5) and (7), one can see that maximization of Eq. (6) is equivalent to minimization of the following criterion [43]:

J(s\,|\,H_w,\Theta)=N_t(w)\log\!\left(\hat{\sigma}_{\varphi t}\,\hat{\sigma}_{\alpha t}\sqrt{1-\hat{\rho}_t^2}\right)+N_b(w)\log\!\left(\hat{\sigma}_{\varphi b}\,\hat{\sigma}_{\alpha b}\sqrt{1-\hat{\rho}_b^2}\right).

(8)

Minimization of Eq. (8) leads to maximization of the likelihood function in Eq. (6); thus, this optimization forces the snake polygon (representing H_w) to evolve toward the statistically optimal H_w for the target support.
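To make Eq. (8) concrete, a direct Python evaluation of the criterion on synthetic magnitude/phase data might look as follows. This is an illustrative sketch under assumed synthetic statistics, not the authors' code; it checks that the true target support yields a lower criterion than a misplaced window.

```python
import numpy as np

def criterion_J(alpha, phi, w):
    """Eq. (8): ML estimates of Eq. (7) plugged into the segmentation criterion.

    alpha, phi : 1D arrays of pixel magnitudes and phases
    w          : boolean target-support mask (True inside the snake)
    """
    J = 0.0
    for mask in (w, ~w):  # target region, then background region
        a, p = alpha[mask], phi[mask]
        s_a, s_p = a.std(), p.std()      # biased (1/N) estimates, as in Eq. (7)
        rho = np.corrcoef(a, p)[0, 1]    # magnitude-phase correlation estimate
        J += mask.sum() * np.log(s_p * s_a * np.sqrt(1.0 - rho**2))
    return J

# Synthetic scene: 200 "target" pixels with distinct magnitude/phase statistics.
rng = np.random.default_rng(0)
n = 400
true_w = np.zeros(n, dtype=bool)
true_w[:200] = True
alpha = np.where(true_w, rng.normal(2.0, 0.05, n), rng.normal(0.5, 0.05, n))
phi = np.where(true_w, rng.normal(1.0, 0.05, n), rng.normal(0.2, 0.05, n))

wrong_w = np.roll(true_w, 100)  # a misplaced window mixes the two populations
```

Here `criterion_J(alpha, phi, true_w)` comes out smaller than `criterion_J(alpha, phi, wrong_w)`, which is exactly the preference that drives the snake evolution.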

3.2 Stochastic optimization algorithm

In order to carry out the optimization, a simple stochastic algorithm is employed. The basic idea is to model the snake by a polygon with a fixed number of nodes, l, and to iteratively deform the polygon nodes such that the optimization criterion in Eq. (8) decreases at every iteration. This procedure is illustrated in the following diagram:

Fig. 3. Schematic diagram of the primary stochastic minimization algorithm.

Several techniques, such as the multi-resolution snake, adaptive node selection, and direction inertia, are presented in [43] to increase the robustness and convergence speed of the above algorithm. The algorithm terminates when J(s,w) cannot be decreased further over many consecutive iterations.
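The accept-if-improved loop can be sketched as below on a 1D analogue, where the "contour" degenerates to two interval endpoints and a simple within-region variance cost stands in for Eq. (8). Both simplifications are assumptions for illustration, not the paper's algorithm.

```python
import random
import statistics

def region_cost(x, lo, hi):
    """Within-region variance cost: a stand-in for the criterion J(s, w)."""
    cost = 0.0
    for seg in (x[lo:hi], x[:lo] + x[hi:]):  # inside, then outside the "contour"
        if seg:
            cost += len(seg) * statistics.pvariance(seg)
    return cost

def stochastic_snake_1d(x, lo, hi, max_stall=500, seed=0):
    """Perturb one endpoint at a time; keep the move only if the cost drops."""
    rng = random.Random(seed)
    best = region_cost(x, lo, hi)
    stall = 0
    while stall < max_stall:
        nlo, nhi = lo, hi
        if rng.random() < 0.5:
            nlo += rng.choice((-1, 1))
        else:
            nhi += rng.choice((-1, 1))
        if 0 <= nlo and nhi <= len(x) and nhi - nlo >= 2:
            cost = region_cost(x, nlo, nhi)
            if cost < best:              # accept only improving deformations
                lo, hi, best = nlo, nhi, cost
                stall = 0
                continue
        stall += 1                       # terminate after a long run of rejections
    return lo, hi

# A flat "background" with a bright 20-sample "target"; start with a loose contour.
signal = [0.0] * 30 + [5.0] * 20 + [0.0] * 50
lo, hi = stochastic_snake_1d(signal, 20, 60)
```

For this signal the endpoints contract onto the bright segment, mirroring how the 2D snake polygon shrinks onto the microorganism boundary.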

4. 3D complex morphology-based recognition of microorganisms

In this section, we review the 3D complex morphology-based recognition of microorganisms [28–32]. The 3D complex morphology pattern is defined as the complex amplitude of the computationally reconstructed holographic images at arbitrary depths. In the following subsections, we present the detailed steps of the recognition technique.

4.1 Feature extraction by means of Gabor-based wavelets

For recognition, it is more efficient to remove unnecessary background before processing the microorganisms. Threshold-based segmentation is performed using histogram analysis [28–32]; however, more advanced methods such as the bivariate region snake of Section 3 can be applied. After the segmentation, the images are decomposed and feature vectors are extracted by Gabor-based wavelets. The Gabor-based wavelets have the form of a Gaussian envelope modulated by a complex sinusoidal function [50–52]. The impulse response (or kernel) of the Gabor-based wavelet in the 2D discrete domain is defined as:

g_{uv}(\mathbf{x})=\frac{\|\mathbf{k}_{uv}\|^2}{\sigma^2}\exp\!\left(-\frac{\|\mathbf{k}_{uv}\|^2\|\mathbf{x}\|^2}{2\sigma^2}\right)\left[\exp(j\,\mathbf{k}_{uv}\cdot\mathbf{x})-\exp\!\left(-\frac{\sigma^2}{2}\right)\right],

(9)

where x is a position vector; k_uv is a wave number vector; and σ is proportional to the standard deviation of the Gaussian envelope. The wave number vector is defined as k_uv = k_{0u}[cos φ_ν, sin φ_ν]^t, with k_{0u} = k_0/δ^{u-1} and φ_ν = [(ν-1)/V]π, for u = 1,…,U and ν = 1,…,V, where k_{0u} is the magnitude of the wave number vector; φ_ν is its azimuth angle; k_0 is the maximum carrier frequency of the Gabor kernels; δ is the spacing factor in the frequency domain; U and V are the total numbers of decompositions along the radial and tangential axes, respectively; and the superscript t denotes the matrix transpose.

By changing the magnitude and direction of the vector k_uv, we can scale and rotate the Gabor kernel into self-similar forms. The size of the Gaussian envelope is the same in the x and y directions and scales with σ/‖k_uv‖. The second term in the square bracket in Eq. (9), exp(-σ^2/2), subtracts the DC value so that the kernel has a zero-mean response [51].

51. T. S. Lee, “Image representation using 2D Gabor wavelets,” IEEE Trans. Pattern Anal. Mach. Intell. 18, 959–971 (1996). [CrossRef]

The Gabor-based wavelets perform band-pass filtering, where the spatial and orientation frequency bandwidths depend on the size of the Gaussian envelope. The carrier frequency of the band-pass filter is determined by k_uv. The Gaussian envelope of the Gabor-based wavelet achieves the minimum space-bandwidth product [50]. It is suitable to extract local features with high-frequency (small u) kernels and global features with low-frequency (large u) kernels.

Let y be the filtered output (the Gabor coefficients) of the image Ô after it is 2D-convolved with the Gabor kernel g_uv:

y_{uv}(x,y)=\sum_{x'=1}^{N_x}\sum_{y'=1}^{N_y}g_{uv}(x-x',\,y-y')\,\hat{O}(x',y'),

(10)

where Ô is the complex amplitude of the segmented image, and N_x and N_y are the sizes of the image in the x and y directions, respectively. The magnitude of Ô is normalized between 0 and 1. A rotation-invariant vector is defined at each pixel. The rotation-invariant property is achieved by summing the magnitudes of the Gabor coefficients along the tangential axis of the frequency domain. Thus, we can define the U-dimensional rotation-invariant node vector as:

\mathbf{v}[\mathbf{x}]=\left[\sum_{\nu=1}^{V}\left|y_{1\nu}[\mathbf{x}]\right|\;\cdots\;\sum_{\nu=1}^{V}\left|y_{U\nu}[\mathbf{x}]\right|\right]^t.

(11)
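A small numerical sketch of Eqs. (9)–(11) is given below: it builds the Gabor kernels and evaluates the node vector at a single pixel by a direct windowed sum rather than the full convolution of Eq. (10). The parameter values (k0, δ, σ, kernel size) are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def gabor_kernel(u, v, size=9, k0=np.pi / 2, delta=2.0, V=4, sigma=np.pi):
    """Eq. (9): Gaussian envelope times a DC-free complex carrier."""
    k0u = k0 / delta ** (u - 1)              # radial frequency k_{0u}
    ang = (v - 1) * np.pi / V                # azimuth angle phi_v
    kx, ky = k0u * np.cos(ang), k0u * np.sin(ang)
    c = np.arange(size) - size // 2
    X, Y = np.meshgrid(c, c)
    env = (k0u**2 / sigma**2) * np.exp(-k0u**2 * (X**2 + Y**2) / (2 * sigma**2))
    return env * (np.exp(1j * (kx * X + ky * Y)) - np.exp(-sigma**2 / 2))

def node_vector(image, cx, cy, U=2, V=4, size=9):
    """Eq. (11): per-scale sums over orientation of |Gabor coefficients|."""
    h = size // 2
    patch = image[cy - h:cy + h + 1, cx - h:cx + h + 1]
    return np.array([
        sum(abs(np.sum(gabor_kernel(u, v, size, V=V) * patch))
            for v in range(1, V + 1))
        for u in range(1, U + 1)
    ])

img = np.random.default_rng(1).random((21, 21))
v1 = node_vector(img, 10, 10)
v2 = node_vector(np.rot90(img), 10, 10)  # 90-degree rotation about the center
```

For a real-valued patch and the V = 4 orientations spanning [0, π), a 90° rotation only permutes the orientation responses, so `v1` and `v2` agree to floating-point precision, illustrating the rotation invariance of Eq. (11).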

4.2 Graph matching technique

The rigid graph matching (RGM) technique [53–56] measures the similarity of the 3D complex morphology between a reference microorganism and unknown input samples. A graph is defined as a set of nodes associated with a local area. Let R and S be two identical rigid graphs placed on the reference image O r and the unknown sample image O s, respectively. The location of the reference graph R is pre-determined by the translation vector p r and the clockwise rotation angle θr. The position vectors of the K nodes in the graph R are computed as:

$$\mathbf{x}_{k}(\mathbf{p}_{r},\theta_{r})=A_{\theta_{r}}(\mathbf{x}_{k}^{o}-\mathbf{x}_{c}^{o})+\mathbf{p}_{r},\qquad A_{\theta}=\begin{bmatrix}\cos\theta & \sin\theta\\ -\sin\theta & \cos\theta\end{bmatrix},\qquad k=1,\dots,K,$$
(12)

where $\mathbf{x}_{k}^{o}$ and $\mathbf{x}_{c}^{o}$ are the position vectors of node k and of the center of the graph without any translation or rotation, respectively; and K is the total number of nodes in the graph. The similarity between the graphs R and S is measured by the mean normalized inner product of their node vectors:

$$\Gamma_{RS}(\mathbf{p}_{s},\theta_{s})=\frac{1}{K}\sum_{k=1}^{K}\frac{\big\langle \mathbf{v}_{R}[\mathbf{x}_{k}(\mathbf{p}_{r},\theta_{r})],\,\mathbf{v}_{S}[\mathbf{x}_{k}(\mathbf{p}_{s},\theta_{s})]\big\rangle}{\big\|\mathbf{v}_{R}[\mathbf{x}_{k}(\mathbf{p}_{r},\theta_{r})]\big\|\,\big\|\mathbf{v}_{S}[\mathbf{x}_{k}(\mathbf{p}_{s},\theta_{s})]\big\|},$$
(13)

where 〈·,·〉 denotes the inner product; and $\mathbf{v}_{R}[\mathbf{x}_{k}(\mathbf{p}_{r},\theta_{r})]$ and $\mathbf{v}_{S}[\mathbf{x}_{k}(\mathbf{p}_{s},\theta_{s})]$ are the node vectors of the graph R in the reference image and of the graph S in the unknown input image, respectively. We adopt a difference cost function to improve the discrimination capability between the two graphs R and S. The difference cost is defined as the mean norm of the difference between the two node vectors:

$$C_{RS}(\mathbf{p}_{s},\theta_{s})=\frac{1}{K}\sum_{k=1}^{K}\big\|\mathbf{v}_{R}[\mathbf{x}_{k}(\mathbf{p}_{r},\theta_{r})]-\mathbf{v}_{S}[\mathbf{x}_{k}(\mathbf{p}_{s},\theta_{s})]\big\|.$$
(14)

The local area covered by the graph S is identified with the reference shape if the following two conditions are satisfied:

$$\Gamma_{RS}(\mathbf{p}_{s},\hat{\theta}_{s})>\alpha_{\Gamma}\quad\text{and}\quad C_{RS}(\mathbf{p}_{s},\hat{\theta}_{s})<\alpha_{C},$$
(15)

where $\alpha_{\Gamma}$ and $\alpha_{C}$ are thresholds for the similarity function and the difference cost, respectively; and $\hat{\theta}_{s}$ is obtained by searching for the rotation angle that maximizes the similarity function at the position vector p s. In this subsection, we utilize the graph matching technique for the identification of unknown input objects. However, a training process can be added as a subsequent stage after graph matching. In the case of microorganisms, automatic selection of training data by means of graph matching can be useful when biological samples overlap and/or cluster, which makes it difficult to select individual objects. A more detailed scheme for automatic feature selection with training and decision rules can be found in [29,30].
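The RGM evaluation of Eqs. (12)–(14) can be sketched in Python. This is a simplified illustration under our own assumptions (nearest-pixel node sampling, real-valued node vectors, and hypothetical function names), not the authors' code.

```python
import numpy as np

def graph_nodes(offsets, center, position, theta):
    """Rotate the node offsets about the graph center (clockwise by theta),
    then translate to the graph position (Eq. (12))."""
    c, s = np.cos(theta), np.sin(theta)
    A = np.array([[c, s], [-s, c]])          # clockwise rotation matrix
    return (offsets - center) @ A.T + position

def match_scores(V_ref, V_in, offsets, center, p_r, th_r, p_s, th_s):
    """Similarity (normalized inner product, Eq. (13)) and difference cost
    (Eq. (14)) between the reference graph R and a candidate graph S."""
    def sample(field, pts):
        ij = np.round(pts).astype(int)       # nearest-pixel node sampling
        return field[ij[:, 1], ij[:, 0]]     # (K, U) node vectors
    vr = sample(V_ref, graph_nodes(offsets, center, p_r, th_r))
    vs = sample(V_in, graph_nodes(offsets, center, p_s, th_s))
    num = np.sum(vr * vs, axis=1)
    den = np.linalg.norm(vr, axis=1) * np.linalg.norm(vs, axis=1) + 1e-12
    similarity = np.mean(num / den)          # Gamma_RS
    cost = np.mean(np.linalg.norm(vr - vs, axis=1))  # C_RS
    return similarity, cost
```

A full matcher would scan (p_s, θ_s) over the input image and apply the thresholds of Eq. (15) to each candidate pose.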

5. Shape-independent recognition approach

We apply statistical algorithms to the 3D recognition system to make it independent of the shape and profile of the microorganisms [33]. The shape-independent recognition approach may be suitable for recognizing 3D microorganisms such as bacteria and biological objects that do not have well defined shapes or profiles. For example, they may be simple, unicellular, and branched in their morphological traits. It can also be applied to cells whose shape and profile vary rapidly. For the shape-independent approach, a number of sample segments are randomly extracted from the segmented 3D image of a microorganism. These samples are processed using statistical cost functions to classify the microorganism. The sample distributions for the difference of parameters between the sample segment features of the reference and input images are calculated using statistical estimation.

First, we reconstruct the 3D microorganism as a volume image from a SEOL digital hologram corresponding to a reference microorganism. Then, we randomly extract N pixels from the reconstructed 3D image. We repeat the above steps for S specimens of the same class of microorganism; therefore, each sample segment consists of N × S complex values. We denote each pixel value in the trial sample segment as $X_{NS}^{n}$ [see Fig. 4]. We refer to each reconstruction plane of the 3D volume as a "page." Now, we change the locations of each sample in a given page and repeat the above steps n times.

Similarly, we record the SEOL digital hologram of an unknown input microorganism and then restore the original input image. Next, we randomly extract N pixels n times from the unknown reconstructed 3D image and repeat the above steps for S specimens of the same microorganism. Each sample segment consists of N × S complex values, and we obtain a total of n such segments as well. We denote each pixel value in the trial sample segment as $Y_{NS}^{n}$ [see Fig. 4]. For classification and recognition of biological microorganisms, we use statistical inference for the equality of the locations and dispersions between the reference and unknown sample data using statistical sampling and estimation theory.

We assume that the random variables $X_{NS}$ and $Y_{NS}$, which are the elements of the reference and unknown input sample segments, are statistically independent with identical population distributions f(X) and f(Y), respectively. Also, let $X_{NS}$ be independent of $Y_{NS}$. Note that the image reconstructed from a SEOL hologram consists of complex values, so we perform two separate univariate hypothesis tests, on the real part and on the imaginary part, respectively.
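The random sampling-segment extraction described above can be sketched as follows; the helper below is hypothetical (our own name and signature), shown only to make the sampling procedure concrete.

```python
import numpy as np

def sample_segments(volume, N, n, seed=None):
    """Draw n trial sampling segments, each of N randomly located pixels,
    from a reconstructed complex 3D volume (pages stacked along axis 0).
    Returns the real and imaginary parts separately, since the hypothesis
    tests are run on each part as a univariate sample."""
    rng = np.random.default_rng(seed)
    flat = volume.reshape(-1)
    idx = rng.integers(0, flat.size, size=(n, N))  # random pixel locations
    segs = flat[idx]                               # shape (n, N), complex
    return segs.real, segs.imag
```

Each row of the returned arrays is one trial sampling segment to be compared against the corresponding reference segments.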

From the histogram analysis of the real and imaginary parts of the 3D images reconstructed from the SEOL digital hologram, we may consider that the random variables (real or imaginary parts of the reconstructed image) in a sampling segment approximately follow a Gaussian distribution. The normality of the sample data can be checked with the Chi-square goodness-of-fit test [57].

To compare the variances of two sample segments between the reference and input, if the sample data are normally distributed, the following F-test can be used [47,57]:

$$F_{(N_X-1),(N_Y-1)}=\frac{\{N_Y/(N_Y-1)\}\,V[Y]}{\{N_X/(N_X-1)\}\,V[X]}=\frac{\hat{V}[Y]}{\hat{V}[X]},$$
(16)

where $N_X$ and $N_Y$ are the sizes of the reference and input sampling segments, respectively; V[·] denotes the variance; and $\hat{V}[\cdot]$ is the unbiased sample variance. If the sample data are not normally distributed, we use the following Levene's test [57], performing an analysis of variance on the absolute deviations of the data from their respective sample means:

$$W=\frac{(N_X+N_Y-2)\big[N_X(\bar{Z}_X-\bar{Z})^{2}+N_Y(\bar{Z}_Y-\bar{Z})^{2}\big]}{\sum_{j=1}^{N_X}(Z_{Xj}-\bar{Z}_X)^{2}+\sum_{j=1}^{N_Y}(Z_{Yj}-\bar{Z}_Y)^{2}},$$
(17)

where $Z_{Xj}=|X_j-\bar{X}|$ and $Z_{Yj}=|Y_j-\bar{Y}|$ are the absolute deviations from the sample means of the reference and unknown input, respectively; $\bar{Z}_X$ and $\bar{Z}_Y$ are the sample means of the $Z_{Xj}$ and $Z_{Yj}$; and $\bar{Z}$ is the overall mean of all the $Z_j$.
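The two dispersion tests can be sketched with SciPy; this uses SciPy's built-in Levene implementation (with mean-centering, matching Eq. (17)) rather than the authors' code, and the function name is our own.

```python
import numpy as np
from scipy import stats

def variance_tests(ref, inp):
    """Compare the dispersions of two sampling segments: the F-ratio of
    unbiased sample variances (Eq. (16)) with a two-sided p-value, and
    Levene's test (Eq. (17)) as the distribution-free alternative."""
    f = np.var(inp, ddof=1) / np.var(ref, ddof=1)   # V_hat[Y] / V_hat[X]
    dfn, dfd = len(inp) - 1, len(ref) - 1
    p_f = 2 * min(stats.f.cdf(f, dfn, dfd), stats.f.sf(f, dfn, dfd))
    w, p_lev = stats.levene(ref, inp, center='mean')  # deviations from means
    return p_f, p_lev
```

A small p-value from either test rejects the hypothesis of equal dispersions, i.e. the input segment is unlikely to come from the reference class.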

Fig. 4. The design procedure for shape independent 3D recognition of biological microorganisms. The sampling segments are extracted in the reconstructed 3D image using SEOL digital hologram.

To compare the means of two sample segments between the reference and input images, if the sample data are normally distributed, the following t-test can be used [47,57]:

$$T=\frac{E[X]-E[Y]}{\sqrt{\bar{V}_{P}\,\{(N_X)^{-1}+(N_Y)^{-1}\}}},$$
(18)

where $\bar{V}_{P}$ is the pooled estimator of the variance of the actual population; and E[·] denotes the expectation operator. If the sample data are not normally distributed, we use the following Mann-Whitney test [57], which requires no assumptions about the shape of the underlying distributions and compares the medians of the respective samples:

$$U=N_X N_Y+\frac{N_X(N_X+1)}{2}-R_X,$$
(19)

where the statistic U corresponds to the reference image; and $R_X$ is the rank sum of the sample data of the reference image. If the sample size is greater than 8, the statistic U is approximately normally distributed, so Eq. (19) can be standardized as $Z=(U-\mu_U)/\sigma_U$, where $\mu_U$ and $\sigma_U$ are the mean and standard deviation of the statistic U, respectively.
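The two location tests can likewise be sketched with SciPy's standard implementations of the pooled-variance t-test and the Mann-Whitney U test; the wrapper name is our own.

```python
import numpy as np
from scipy import stats

def location_tests(ref, inp):
    """Compare the locations of two sampling segments: the pooled-variance
    t-test (Eq. (18)) for normal data, and the Mann-Whitney U test
    (Eq. (19)) as the nonparametric alternative."""
    t, p_t = stats.ttest_ind(ref, inp, equal_var=True)   # pooled variance
    u, p_u = stats.mannwhitneyu(ref, inp, alternative='two-sided')
    return p_t, p_u
```

For large samples SciPy's Mann-Whitney p-value uses the normal approximation Z = (U − μ_U)/σ_U mentioned above.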

We also perform the Kolmogorov-Smirnov test (K-S test) [57] as a distribution-free test for the comparison of two populations. The statistic is given by:

$$J=\max_{-\infty<u<\infty}\big|F_X(u)-F_Y(u)\big|,$$
(20)

where $F_X(u)$ and $F_Y(u)$ are the empirical cumulative distribution functions (CDFs) of the two samples of data.

If the p-value calculated from the statistical tests in Eqs. (16)–(20) is less than the desired value at a significance level α, we reject the null hypothesis H 0. Note that H 0 states that there is no statistically significant difference between the dispersions (variances), locations (means), or distribution functions at the given confidence level.
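The K-S test and the decision rule just described can be combined into one check; this sketch uses SciPy's two-sample K-S test, and the function name and the α = 0.05 default are our assumptions.

```python
import numpy as np
from scipy import stats

def same_population(ref, inp, alpha=0.05):
    """Two-sample K-S statistic J = max|F_X(u) - F_Y(u)| (Eq. (20)).
    H0 (equal distributions) is rejected when the p-value falls below
    alpha; the input is then declared a different class than the reference."""
    j, p = stats.ks_2samp(ref, inp)
    return p >= alpha, j    # (accepted as same class, K-S statistic J)
```

The same accept/reject rule applies to the p-values of the variance and mean tests of Eqs. (16)–(19).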

6. Shift-invariant recognition approach

With SEOL digital holographic microscopy, we can reconstruct cross-sectional images of biological microorganisms along the longitudinal direction. This enables us to obtain focused images of microorganisms located at different reconstruction distances, as shown in Fig. 5. By applying correlation techniques between the volumetric intensity image of the unknown input microorganisms and a reference intensity image, we can recognize the unknown input and find its focused image [24]. This makes the recognition system shift-invariant.

Fig. 5. Coordinate system for the SEOL digital hologram and the shift-invariant 3D recognition of microorganisms.

The cross-correlation function Corr(x, y, p) between the reference image and the unknown input section images is given by:

$$Corr(x,y,p)=FT^{-1}\big\{FT[U_X(x,y,p)]\times FT^{*}[U_Y(x,y)]\big\},$$
(21)

where FT denotes the Fourier transform; p is a page number; and $U_X(x,y,p)$ and $U_Y(x,y)$ are the amplitude field distributions of the unknown input and the reference, respectively.
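The FFT-based page-by-page correlation of Eq. (21) can be sketched as follows; the zero-mean unit-norm normalization (so that the peak of a perfect match is 1, as in the correlation coefficients of Fig. 13) and the function name are our own assumptions.

```python
import numpy as np

def correlation_pages(u_input, u_ref):
    """Normalized circular cross-correlation of each reconstruction page p
    of the unknown input volume with a reference intensity image (Eq. (21)).
    The per-page peak value is invariant to lateral shifts of the object."""
    ref = np.abs(u_ref) ** 2                 # reference intensity image
    ref = ref - ref.mean()
    ref /= np.linalg.norm(ref) + 1e-12
    F_ref = np.conj(np.fft.fft2(ref))        # FT* of the reference
    peaks = []
    for page in u_input:                     # iterate over pages p
        pg = np.abs(page) ** 2
        pg = pg - pg.mean()
        pg /= np.linalg.norm(pg) + 1e-12
        corr = np.fft.ifft2(np.fft.fft2(pg) * F_ref)
        peaks.append(float(np.abs(corr).max()))
    return np.array(peaks)
```

The index of the largest peak over p locates the focused page of the unknown input, while the peak value itself serves as the recognition score.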

7. Experimental results

7.1 Segmentation results

In this section, some experimental results of the bivariate region snake segmentation described in Section 3 are presented. Computationally reconstructed images of several microorganisms from SEOL holographic microscopy are used. As discussed earlier, the bivariate jointly distributed region snake incorporates both the magnitude and phase information simultaneously, since the holographic images are complex; however, the magnitude images are used for illustration in the figures hereafter. The snake contour is modeled as a polygon with l vertices, and the binary window function w is set to 1 inside and 0 outside the polygon. The images in the first column of Fig. 6(a) show two different diatom algae over which the snakes are initialized with 4 nodes. Although the initial contour is completely different from the target boundaries, the bivariate region snake is able to capture the microorganism body after approximately 1500 iterations [see Fig. 6(b)]. As can be seen in Fig. 6(c), the optimization traces level off and show very slight progress after the 1500th iteration, which can serve as an indicator to stop the iterations.

Fig. 6. (a) Magnitude images of two diatom algae on which the snake is initialized with 4 points, (b) the final segmentation carried out by bivariate region snake after 1500 iterations, (c) the trace of the optimization criterion during the iteration.

Fig. 7. (a) Magnitude image of an out-of-focus sphacelaria alga reconstructed from a SEOL hologram and the 5-point snake initialization, (b) the segmented microorganism, (c) the trace of the optimization criterion during the iteration.

Fig. 8. (a) Magnitude image of computationally reconstructed SEOL hologram of diatom algae, (b) segmentation result, (c) the final binary windows w for the target, (d) optimization profile showing mutations as small peaks.

7.2 Experimental results for 3D morphology-based recognition

To test the recognition performance, we generate 9 holograms each for sphacelaria alga and tribonema aequale alga samples. We denote the 9 sphacelaria alga samples as A1,…,A9 and the 9 tribonema aequale alga samples as B1,…,B9. To test the robustness of the proposed algorithm, the position of the CCD is changed during the experiments, resulting in different depths for the focused reconstruction image. The samples A1–A3 are reconstructed at 180 mm, A4–A7 at 200 mm, and A8 and A9 at 300 mm, while all samples of tribonema aequale (B1–B9) are reconstructed at 180 mm for the focused images.

Fig. 9. Each image is composed of different components of the node vector in Eq. (11) when (a) u = 1, (b) u = 2, (c) u = 3.
Fig. 10. Recognition of sphacelaria alga, (a) reference sample A1 with the graph R, (b) graph matching result of unknown input sample A9, (c) number of detections, (d) maximum similarity and minimum difference cost, (a) and (b) are presented by contrast reversal for better visualization.

Magnitude and phase parts of the computationally reconstructed complex images are cropped and reduced to 256 × 256 pixels with a reduction ratio of 0.25. During the segmentation, we assume that the microorganisms occupy less than 20% of the lower-magnitude region of the complex image and that the magnitude of the microorganisms is less than 45% of that of the background diffraction field. The parameters for the Gabor-based wavelets are set as: σ = π, k 0 = π/4, δ = √2, U = 3, and V = 6. Figure 9 shows the node vector components for u = 1, 2, and 3. Only the real parts of y in Eq. (10) are used for the feature extraction.

To recognize two filamentous objects with different thicknesses and distributions, we select two different reference graphs and place them on the samples A1 and B1. A rectangular grid is selected as the reference graph for the sphacelaria alga, which shows regular thickness in the reconstructed images. The reference graph is composed of 25×3 nodes, and the distance between nodes is 4 pixels in the x and y directions; therefore, the total number of nodes in the graph is 75. The reference graph R is placed at p r = [81 75]t with θr = 135° in the sample A1, as shown in Fig. 10(a). Only the threshold α Γ, set at 0.6, is used. The threshold is selected heuristically to produce better results.

Fig. 11. Recognition of tribonema aequale alga, (a) reference sample B1 with the graph R, (b) graph matching result of unknown input sample B2, (c) number of detections, (d) maximum similarity and minimum difference cost, (a) and (b) are presented by contrast reversal for better visualization.

To recognize tribonema aequale alga, a wider rectangular grid is selected to identify its thin filamentous structure. The reference graph is composed of 20×3 nodes, and the distance between nodes is 4 pixels in the x direction and 8 pixels in the y direction; therefore, the total number of nodes in the graph is 60. The reference graph R is placed at p r = [142 171]t with θr = 90° in the sample B1, as shown in Fig. 11(a). The thresholds α Γ and αc are set at 0.8 and 0.7, respectively. Figure 11(b) shows another sample (B2) of the true-class input image with the graph matching results. The reference shape is detected 30 times along the thin filamentous object. Figure 11(c) shows the number of detections for the 9 true-class and 9 false-class microorganisms. The detection number for the true-class samples B1–B9 varies from 6 to 49. Four false detections are found in one of the false-class samples (A8). Figure 11(d) shows the maximum similarity and the minimum difference cost for all samples.

7.3 Experimental results for shape-tolerant and shift-invariant 3D microorganism recognition

In this subsection, we conduct statistical estimation and inference to test the performance of our shape-tolerant 3D microorganism recognition system using SEOL digital holography. First, 100 trial sampling segments are produced by randomly selecting pixel values in the segmented 3D image of oscillatoria bacteria used as the reference microorganism, where the size of each trial sampling segment is set to 30, 100, and 200 pixels. We apply the Sobel edge-detection method to the segmented 3D images.

Fig. 12. The magnified intensity images at the distance d = 270 mm of microorganisms by use of a 100 × microscope objective, (a) oscillatoria bacteria, (b) diatom alga.

Similarly, a number of sampling segments are randomly selected in the oscillatoria bacteria 3D image as the true-class inputs and in the diatom alga image as the false-class inputs. We produce 100 true-class and 100 false-class input sampling segments. The reference and input images are reconstructed at distance d = 270 mm, as shown in Fig. 12.

Table 1(a) shows the experimental results of the F-test for comparing the variances between the reference and the unknown input. As shown in Table 1(a), the average p-values for the true-class input are around 0.4534 and 0.5166 for the real and imaginary parts at sample size 100, respectively, while those for the false-class input are around 0.0080 and 0.0226. Table 1(b) shows the results of Levene's test for the difference of scale parameters between the reference and input. The average p-values for the true-class input are around 0.7068 and 0.6856 for the real and imaginary parts at sample size 100, respectively, and those for the false-class input are around 0.0688 and 0.0156.

Table 1. (a) F-test (normal distribution), (b) Levene's test (any continuous distribution) for the equality of the dispersion parameter between two sampling segments versus a sample size.

Levene's test (any continuous distribution)

                        Real part                               Imaginary part
                  True class          False class         True class          False class
                  statistic  p-value  statistic  p-value  statistic  p-value  statistic  p-value
Sample size 30      1.7120   0.3686     1.5140   0.4060     0.5240   0.5440     2.0960   0.3700
Sample size 100     0.2160   0.7068     5.9300   0.0688     0.3000   0.6856     8.2300   0.0156

(b)

Table 2. (a) T-test (normal distribution) for the equality of the location parameter, (b) Mann-whitney test (nonparametric) for comparing the median between two sampling segments versus a sample size.

Mann-Whitney test (any continuous distribution)

                        Real part                     Imaginary part
                  True class    False class     True class    False class
                  (p-value)     (p-value)       (p-value)     (p-value)
Sample size 30      0.5217        0.4534          0.5100        0.5091
Sample size 100     0.5143        0.3839          0.4567        0.5075
Sample size 200     0.5336        0.3092          0.5626        0.5067

(b)

Table 3. K-S test (distribution-free) for comparing the cumulative distribution between two sampling segments versus a sample size.


The experimental results of the t-test for comparing the means of the reference and input are shown in Table 2(a). The average p-values for the true-class input are around 0.5530 and 0.5677 for the real and imaginary parts at sample size 200, respectively, while those for the false-class input are around 0.2141 and 0.5009. Table 2(b) shows the results of the nonparametric test for the difference of the medians between the reference and input. The average p-values for the true-class input are around 0.5336 and 0.5626 for the real and imaginary parts at sample size 200, respectively, and those for the false-class input are around 0.3092 and 0.5067. Table 3 shows the experimental results of the distribution-free test for the comparison of two populations. The average maximum differences between the cumulative distributions for the true-class input are around 0.0710 and 0.0770 for the real and imaginary parts at sample size 200, respectively, and those for the false-class input are around 0.1320 and 0.1240.

We calculate the correlation coefficient between the diatom alga image reconstructed at d = 400 mm as the reference and the unknown input image to test the shift-invariance of our recognition system, where we move the longitudinal position of the microorganism using an xy-translation stage. As shown in Fig. 13, we obtain a correlation peak at a reconstruction distance of around 270 mm for the true-class input, whereas for the false-class input the correlation value remains below about 0.1.

Fig. 13. Experimental result of correlation coefficient calculated between the intensity image of reference and the volume intensity image of unknown input versus reconstruction distance.

7.4 Discussion of real-time processing

For real-time application, computational complexity should be considered. Since SEOL holography requires only a single exposure, real-time sensing is possible. For computational reconstruction of holographic images, the computational time is of the same order as the fast Fourier transform (FFT), which is O(N log₂ N), where N is the total number of pixels in the holographic image. Therefore, with high-speed electronics, real-time detection is possible.

For the morphology-based recognition, the computational time of the Gabor filtering is of the same order as the FFT. For the graph matching, the computational time depends on the shape and size of the graph, the dimension of the feature vector, and the search steps for the translation vector and the rotation angle. Since the most time-consuming operation is the search over the graphs, which is O(N²), the system requires quadratic computational complexity.

For the shift-invariant recognition approach, the cross-correlation function can be obtained with the same order of complexity as the FFT. Therefore, real-time processing can be achieved by developing specialized hardware or using parallel processing.

8. 3D visualization and recognition using integral imaging

In this section, we present a brief discussion of 3D sensing and visualization of biological microorganisms using integral imaging (II) [19,58–70], which can be combined with the recognition algorithms presented in this paper. In contrast to holography, II requires only incoherent illumination to record the information of a 3D scene. II is a promising technique based on recording the multi-view directional information of a 3D scene. A micro-lens array projects the 3D scene onto a detector array, generating a set of elemental images. Captured micro-objects have different perspective and location information in each elemental image. The scene in II can be illuminated under ambient or incoherent light. Reconstruction is the reverse of the sensing process. In computational reconstruction, the elemental images are numerically projected through a virtual lens array to reproduce the original 3D object by means of a geometrical ray projection method [66–68]. Therefore, volumetric scenes can be reconstructed at different longitudinal distances. Computational reconstruction of II can also compensate for the image-quality degradation caused by the optical devices [68]. There are several advantages to 3D object recognition using II. One advantage is that II captures multiple perspectives in a single shot. The depth and perspective information in the multiple-perspective images can be utilized to build a compact 3D recognition system. Another advantage is that II is a passive sensor using incoherent light. Indeed, computational II reconstructs volumetric scenes at different depths, so we are able to recognize objects of interest located at different longitudinal distances.
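The geometrical ray back-projection reconstruction described above can be sketched as a shift-and-sum over elemental images. This is a simplified pinhole-geometry sketch under our own assumptions: the disparity formula shift = pitch × gap / distance, the function name, and the averaging scheme are illustrative, not the authors' implementation.

```python
import numpy as np

def reconstruct_ii(elemental, pitch, distance, gap):
    """Computational II reconstruction at one depth: each elemental image
    is shifted in proportion to its lens position (assumed pinhole
    geometry), then the overlapping shifted copies are averaged."""
    K, L, h, w = elemental.shape             # lens-array grid and image size
    shift = pitch * gap / distance           # assumed pixel disparity per lens step
    H = h + int(round(shift * (K - 1)))
    W = w + int(round(shift * (L - 1)))
    out = np.zeros((H, W))
    cnt = np.zeros((H, W))
    for k in range(K):
        for l in range(L):
            dy, dx = int(round(shift * k)), int(round(shift * l))
            out[dy:dy + h, dx:dx + w] += elemental[k, l]
            cnt[dy:dy + h, dx:dx + w] += 1
    return out / np.maximum(cnt, 1)          # average the overlapping rays
```

Sweeping the reconstruction distance produces the volumetric image stack from which in-focus planes, such as those in Fig. 15, are selected.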

Fig. 14. Experimental setup for 3D II recording of microorganisms

The experimental system uses a micro-lens array and a pick-up camera, as shown in Fig. 14. A filamentous microorganism, sphacelaria alga, with a size of 50–100 μm is used in the experiments. A set of elemental images is captured with one exposure. Reconstructed microorganism images at different depths (d) are shown in Fig. 15.

Fig. 15. Reconstructed volumetric images of sphacelaria alga at different depths, (a) d = 214 μm, (b) d = 290 μm.

9. Summary and conclusions

Automatic recognition of biological microorganisms is very challenging because of their strong resemblance and dynamic nature such as moving, growing, and varying in size and shape. There are broad applications of real-time 3D surveillance and identification of dynamic microscopic bio-organic events. This paper is an overview of techniques for 3D sensing, imaging, segmentation, and recognition of biological microorganisms including SEOL holographic microscopy. 3D sensing and reconstruction by means of SEOL holographic microscopy is suitable for inspection of dynamic biological microscopic events. The sensing stage is robust to dynamic movement of microscopic objects and environmental conditions as compared with the multiple-exposure phase-shifting digital holography. The setup of SEOL digital holography is simpler than off-axis holography and more robust to object size and scale variations.

A number of approaches are presented for the recognition of the biological microorganisms. Segmentation extracts regions of interest for further processing. A number of techniques are discussed for segmentation of biological microorganisms sensed by SEOL holographic microscopy. In particular, bivariate jointly distributed region snake is developed as a statistical segmentation method maximizing the conditional probability of the target hypothesis assuming the joint Gaussian distribution for the complex amplitude pixels.

One 3D recognition approach examines the morphological traits contained in the complex amplitude of biological microorganisms. Feature extraction by Gabor-based wavelets and a graph matching technique are used to localize the specific 3D shape of reference microorganisms. A scheme for automated feature vector selection is described. Experimental results for the graph matching technique are presented.

Shape-tolerant 3D recognition of microorganisms using the statistical cost functions and inference is presented. A number of sampling segments are randomly extracted from the microorganism and processed with cost functions and statistical inference theory. By investigating the Gaussian property of the holographically reconstructed images of microorganisms, we are able to distinguish the sampling segments of the true-class object in the database from the different classes of microorganisms presented at the input.

Using SEOL digital holographic microscopy, we can numerically reconstruct focused sectional images of biological microorganisms along the longitudinal direction. We have shown experimentally that spatially shift-invariant recognition of biological microorganisms can be achieved throughout the reconstructed volumetric image of the input biological scene.

We have presented several different approaches and image processing techniques based on SEOL holography for 3D segmentation and recognition of biological microorganisms. Although these techniques are applied separately to different classes of microorganisms, the combination of these techniques may enhance the performance for the sensing, segmentation, and identification of unknown microorganisms.

Acknowledgments

This work has been supported by Defense Advanced Research Projects Agency (DARPA). We wish to thank Dr. Seung-Hyun Hong for his assistance.

References and links

1. The largely forgotten influenza of 1918, a.k.a. "Spanish Flu" or "La Grippe," killed an estimated 40 million people worldwide, and an estimated 600,000 in the USA. It infected an estimated 20% of the world population. See Alfred Crosby, America's Forgotten Pandemic: The Influenza of 1918, (Cambridge University Press, Cambridge, 1989).

2. http://www.pbs.org/wgbh/amex/influenza/

3. J. W. Lengeler, G. Drews, and H. G. Schlegel, Biology of the Prokaryotes, (Blackwell Science, New York, 1999).

4. M. G. Forero, F. Sroubek, and G. Cristobal, "Identification of tuberculosis bacteria based on shape and color," Real-Time Imag. 10, 251–262 (2004). [CrossRef]

5. J. Alvarez-Borrego, R. R. Mourino-Perez, G. Cristobal-Perez, and J. L. Pech-Pacheco, "Invariant recognition of polychromatic images of Vibrio cholerae 01," Opt. Eng. 41, 827–833 (2002). [CrossRef]

6. A. L. Amaral, M. da Motta, M. N. Pons, H. Vivier, N. Roche, M. Moda, and E. C. Ferreira, "Survey of protozoa and metazoa populations in wastewater treatment plants by image analysis and discriminant analysis," Environmetrics 15, 381–390 (2004). [CrossRef]

7. S.-K. Treskatis, V. Orgeldinger, H. Wolf, and E. D. Gilles, "Morphological characterization of filamentous microorganisms in submerged cultures by on-line digital image analysis and pattern recognition," Biotechnol. Bioeng. 53, 191–201 (1997). [CrossRef] [PubMed]

8. T. Luo, K. Kramer, D. B. Goldgof, L. O. Hall, S. Samson, A. Remsen, and T. Hopkins, "Recognizing plankton images from the shadow image particle profiling evaluation recorder," IEEE Trans. Syst. Man Cybern. Part B 34, 1753–1762 (2004). [CrossRef]

9. A. Mahalanobis, R. R. Muise, S. R. Stanfill, and A. V. Nevel, "Design and application of quadratic correlation filters for target detection," IEEE Trans. Aerosp. Electron. Syst. 40, 837–850 (2004). [CrossRef]

10. F. A. Sadjadi, "Infrared target detection with probability density functions of wavelet transform subbands," Appl. Opt. 43, 315–323 (2004). [CrossRef] [PubMed]

11. A. K. Jain, Fundamentals of Digital Image Processing, (Prentice Hall, 1989).

12. R. O. Duda, P. E. Hart, and D. G. Stork, Pattern Classification, 2nd ed., (Wiley Interscience, New York, 2001).

13. C. M. Bishop, Neural Networks for Pattern Recognition, (Oxford University Press, New York, 1995).

14. B. Javidi and P. Refregier, eds., Optical Pattern Recognition, (SPIE, 1994).

15. H. Kwon and N. M. Nasrabadi, "Kernel RX-algorithm: a nonlinear anomaly detector for hyperspectral imagery," IEEE Trans. Geosci. Remote Sens. 43, 388–397 (2005). [CrossRef]

16. F. Sadjadi, ed., Milestones in Performance Evaluations of Signal and Image Processing Systems, (SPIE Press, 1993).

17. P. Refregier, V. Laude, and B. Javidi, "Nonlinear joint transform correlation: an optimum solution for adaptive image discrimination and input noise robustness," Opt. Lett. 19, 405–407 (1994).

18. F. Sadjadi, "Improved target classification using optimum polarimetric SAR signatures," IEEE Trans. Aerosp. Electron. Syst. 38, 38–49 (2002). [CrossRef]

19. B. Javidi and F. Okano, eds., Three-Dimensional Television, Video, and Display Technologies, (Springer, New York, 2002).

20. B. Javidi, ed., Image Recognition and Classification: Algorithms, Systems, and Applications, (Marcel Dekker, New York, 2002). [CrossRef]

21. B. Javidi and E. Tajahuerce, "Three dimensional object recognition using digital holography," Opt. Lett. 25, 610–612 (2000). [CrossRef]

22. O. Matoba, T. J. Naughton, Y. Frauel, N. Bertaux, and B. Javidi, "Real-time three-dimensional object reconstruction by use of a phase-encoded digital hologram," Appl. Opt. 41, 6187–6192 (2002). [CrossRef] [PubMed]

23. Y. Frauel and B. Javidi, "Neural network for three-dimensional object recognition based on digital holography," Opt. Lett. 26, 1478–1480 (2001). [CrossRef]

24. E. Tajahuerce, O. Matoba, and B. Javidi, "Shift-invariant three-dimensional object recognition by means of digital holography," Appl. Opt. 40, 3877–3886 (2001). [CrossRef]

25. B. Javidi and D. Kim, "Three-dimensional-object recognition by use of single-exposure on-axis digital holography," Opt. Lett. 30, 236–238 (2005). [CrossRef] [PubMed]

26. D. Kim and B. Javidi, "Distortion-tolerant 3-D object recognition by using single exposure on-axis digital holography," Opt. Express 12, 5539–5548 (2005). [CrossRef]

27. S. Yeom and B. Javidi, "Three-dimensional object feature extraction and classification with computational holographic imaging," Appl. Opt. 43, 442–451 (2004). [CrossRef] [PubMed]

28. B. Javidi, I. Moon, S. Yeom, and E. Carapezza, "Three-dimensional imaging and recognition of microorganism using single-exposure on-line (SEOL) digital holography," Opt. Express 13, 4492–4506 (2005). [CrossRef] [PubMed]

29. S. Yeom, I. Moon, and B. Javidi, "Real-time 3D sensing, visualization and recognition of dynamic biological micro-organisms," Proc. IEEE 94, 550–566 (2006). [CrossRef]

30. S. Yeom and B. Javidi, "Three-dimensional recognition of microorganisms," J. Biomed. Opt. 11, 024017 (2006). [CrossRef]

31. S. Yeom, I. Moon, and B. Javidi, "Two approaches of 3D microorganism recognition using single exposure on-line digital holography," in F. Sadjadi and B. Javidi, eds., Physics of Automatic Target Recognition, (Springer, 2006).

32. B. Javidi, I. Moon, and S. Yeom, "3D microorganism sensing, visualization and recognition using single exposure on-line digital holography," Optics and Photonics News 17, 16–21 (2006). [CrossRef]

33.

I. Moon and B. Javidi, “Shape-tolerant three-dimensional recognition of microorganisms using digital holography,” Opt. Express 13, 9612–9622 (2005). [CrossRef] [PubMed]

34.

S. Kishk and B. Javidi, “Improved resolution 3D object sensing and recognition using time multiplexed computational integral imaging,” Opt. Express 11, 3528–3541 (2003). [CrossRef] [PubMed]

35.

S. Yeom, B. Javidi, and E. Watson, “Photon counting passive 3D image sensing for automatic target recognition,” Opt. Express 13, 9310–9330 (2005). [CrossRef] [PubMed]

36.

T. Kreis, ed., Handbook of Holographic Interferometry, (Wiley, VCH,2005).

37.

J. W. Goodman, Introduction to Fourier Optics 2nd, (Boston, McGraw Hill,1996). [PubMed]

38.

J. W. Goodman and R. W. Lawrence, “Digital image holograms,” Appl. Phys. Lett. 11, 77–79 (1967). [CrossRef]

39.

I. Yamaguchi and T. Zhang, “Phase-shifting digital holography,” Opt. Lett. 22, 1268–1270 (1997). [CrossRef] [PubMed]

40.

P. Ferraro, S. Grilli, D. Alfieri, S. D. Nicola, A. Finizio, G. Pierattini, B. Javidi, G. Coppola, and V. Striano, “Extended focused image in microscopy by digital holography,” Opt. Express 13, 6738–6749 (2005). [CrossRef] [PubMed]

41.

T. Zhang and I. Yamaguchi, “Three-dimensional microscopy with phase-shifting digital holography,” Opt. Lett. 23, 1221–1223 (1998). [CrossRef]

42.

B. R. Brown and A. W. Lohmann, “Complex spatial filtering with binary masks,” Appl. Opt. 5, 967–969 (1966). [CrossRef] [PubMed]

43.

M. DaneshPanah and B. Javidi “Segmentation of 3D holographic images using bivariate jointly distributed region snake,” Opt. Express (submitted).

44.

O. Germain and P. Refregier “Optimal snake-based segmentation of a random luminance target on a spatially disjoint background,” Opt. Lett. 21 (1996). [CrossRef] [PubMed]

45.

C. Chesnaud, V. Page, and P. Refregier , “Improvement in robustness of the statistically independent region snake-based segmentation method of target-shape tracking,” Opt. Lett. 23, 488–490 (1998). [CrossRef]

46.

M. Kass, A. Witkin, and D. Terzopoulus, “Snakes: Active contour models,” Int. J. Comput. Vision 1, 321–331 (1987). [CrossRef]

47.

N. Mukhopadhyay, Probability and Statistical Inference, (New York, Marcel Dekker,2000).

48.

B. Javidi and J. Wang, “Limitations of the classic definition of the signal-to-noise ratio in matched filter based optical pattern recognition,” Appl. Opt. 31, 6826–6829 (1992). [CrossRef] [PubMed]

49.

B. Javidi and J. Wang, “Optimum distortion invariant filters for detecting a noisy distorted target in background noise,” J. Opt. Soc. Am. 12, 2604–2614 (1995). [CrossRef]

50.

J. G. Daugman, “Uncertainty relation for resolution in space, spatial frequency, and orientation optimized by two-dimensional visual cortical filters,” J. Opt. Soc. Am. 2, 1160–1169 (1985). [CrossRef]

51.

T. S. Lee, “Image representation using 2D Gabor wavelets,” IEEE Trans. Pattern. Anal. Mach. Intell. 18, 959–971 (1996). [CrossRef]

52.

J. G. Daugman, “How iris recognition works,” IEEE Trans. Circuits Syst. for Video. Tech. 14, 21–30, (2004). [CrossRef]

53.

M. Lades, J. C. Vorbruggen, J. Buhmann, J. Lange, C. v.d. Malsburg, R. P. Wurtz, and W. Konen, “Distortion invariant object recognition in the dynamic link architecture,” IEEE Trans. Comput. 42, 300–311 (1993). [CrossRef]

54.

R. P. Wurtz, “Object recognition robust under translations, deformations, and changes in background,” IEEE Trans. Pattern. Anal. Mach. Intell. 19, 769–775 (1997). [CrossRef]

55.

B. Duc, S. Fischer, and J. Bigun, “Face authentification with Gabor information on deformable graphs,” IEEE Trans. Image Process. 8, 504–516 (1999). [CrossRef]

56.

S. Yeom, B. Javidi, Y. J. Roh, and H. S. Cho, “Three-dimensional object recognition using x-ray imaging,” Opt. Eng. 43, 027201-1~23 (2005). [CrossRef]

57.

G.W. Snedecor and W.G. Cochran, Statistical Methods, (Iowa State University Press,1989).

58.

M. G. Lippmann, “Epreuves reversibles donnant la sensation du relief,” J. Phys. 7, 821–825 (1908).

59.

H. E. Ives, “Optical properties of a Lippmann lenticuled sheet,” J. Opt. Soc. Am. 21, 171–176 (1931). [CrossRef]

60.

Okoshi, Three-Dimensional Imaging Techniques, (New York, Academic,1976).

61.

C. B. Burckhardt, “Optimum parameters and resolution limitation of integral photography,” J. Opt. Soc. Am. 58, 71–76 (1968). [CrossRef]

62.

F. Okano, H. Hoshino, J. Arai, and I. Yuyama, “Real-time pickup method for a there-dimensional image based on Integral Photography,” Appl. Opt. 36, 1598–1603 (1997). [CrossRef] [PubMed]

63.

F. Jin, J. Jang, and B. Javidi, “Effects of device resolution on three-dimensional integral imaging,” Opt. Lett. 29, 1345–1347 (2004). [CrossRef] [PubMed]

64.

J. S. Jang and B. Javidi, “Three-dimensional integral imaging of micro-objects,” Opt. Lett. 29, 1230–1232 (2004). [CrossRef] [PubMed]

65.

R. Martínez-Cuenca, G. Saavedra, M. Martínez-Corral, and B. Javidi, “Enhanced depth of field integral imaging with sensor resolution constraints,” Opt. Express 12, 5237–5242 (2004). [CrossRef] [PubMed]

66.

H. Arimoto and B. Javidi, “Integral three-dimensional imaging with digital reconstruction,” Opt. Lett. 26, 157–159 (2001). [CrossRef]

67.

A. Stern and B. Javidi, “3D image sensing and reconstruction with time-division multiplexed computational integral imaging (CII),” Appl. Opt. 42, 7036–7042 (2003). [CrossRef] [PubMed]

68.

S. Hong and B. Javidi, “Improved resolution 3D object reconstruction using computational integral imaging with time multiplexing,” Opt. Express 12, 4579 – 4588 (2004). [CrossRef] [PubMed]

69.

Y. Frauel, E. Tajahuerce, O. Matoba, A. Castro, and B. Javidi, “Comparison of passive ranging integral imaging and active imaging digital holography for three-dimensional object recognition,” Appl. Opt. 43, 452–462 (2004). [CrossRef] [PubMed]

70.

A. Stern and B. Javidi, “Three-Dimensional image sensing, visualization, and processing using integral imaging,” Proceedings of the IEEE 94, 591–607(2006). [CrossRef]

OCIS Codes
(100.5010) Image processing : Pattern recognition
(100.6890) Image processing : Three-dimensional image processing
(110.6880) Imaging systems : Three-dimensional image acquisition
(170.3880) Medical optics and biotechnology : Medical and biological imaging

ToC Category:
Imaging Systems

History
Original Manuscript: February 1, 2006
Revised Manuscript: April 10, 2006
Manuscript Accepted: April 10, 2006
Published: May 1, 2006

Virtual Issues
Vol. 1, Iss. 6 Virtual Journal for Biomedical Optics

Citation
Bahram Javidi, Seokwon Yeom, Inkyu Moon, and Mehdi Daneshpanah, "Real-time automated 3D sensing, detection, and recognition of dynamic biological micro-organic events," Opt. Express 14, 3806-3829 (2006)
http://www.opticsinfobase.org/vjbo/abstract.cfm?URI=oe-14-9-3806


References

  1. The largely forgotten influenza pandemic of 1918, a.k.a. "Spanish Flu" or "La Grippe," killed an estimated 40 million people worldwide and an estimated 600,000 in the USA. It infected an estimated 20% of the world population. See Alfred Crosby, America's Forgotten Pandemic: The Influenza of 1918, (Cambridge University Press, Cambridge, 1989).
  2. http://www.pbs.org/wgbh/amex/influenza/
  3. J. W. Lengeler, G. Drews, and H. G. Schlegel, Biology of the Prokaryotes, (New York, Blackwell Science, 1999).
  4. M. G. Forero, F. Sroubek, and G. Cristobal, "Identification of tuberculosis bacteria based on shape and color," Real-time Imag. 10, 251-262 (2004). [CrossRef]
  5. J. Alvarez-Borrego, R. R. Mourino-Perez, G. Cristobal-Perez, and J. L. Pech-Pacheco, "Invariant recognition of polychromatic images of Vibrio cholerae 01," Opt. Eng. 41, 827-833 (2002). [CrossRef]
  6. A. L. Amaral, M. da Motta, M. N. Pons, H. Vivier, N. Roche, M. Moda, and E. C. Ferreira, "Survey of protozoa and metazoa populations in wastewater treatment plants by image analysis and discriminant analysis," Environmetrics 15, 381-390 (2004). [CrossRef]
  7. S.-K. Treskatis, V. Orgeldinger, H. Wolf, and E. D. Gilles, "Morphological characterization of filamentous microorganisms in submerged cultures by on-line digital image analysis and pattern recognition," Biotechnol. Bioeng. 53, 191-201 (1997). [CrossRef] [PubMed]
  8. T. Luo, K. Kramer, D. B. Goldgof, L. O. Hall, S. Samson, A. Remsen, and T. Hopkins, "Recognizing plankton images from the shadow image particle profiling evaluation recorder," IEEE Trans. Syst. Man. Cybern. Part B 34, 1753-1762 (2004). [CrossRef]
  9. A. Mahalanobis, R. R. Muise, S. R. Stanfill, and A. V. Nevel, "Design and application of quadratic correlation filters for target detection," IEEE Trans. Aerosp. Electron. Syst. 40, 837-850 (2004). [CrossRef]
  10. F. A. Sadjadi, "Infrared target detection with probability density functions of wavelet transform subbands," Appl. Opt. 43, 315-323 (2004). [CrossRef] [PubMed]
  11. A. K. Jain, Fundamentals of digital image processing, (Prentice Hall, 1989).
  12. R. O. Duda, P. E. Hart, and D. G. Stork, Pattern Classification, 2nd ed., (New York, Wiley Interscience, 2001).
  13. C. M. Bishop, Neural networks for pattern recognition, (New York, Oxford University Press, 1995).
  14. B. Javidi and P. Refregier, eds., Optical pattern recognition, (SPIE, 1994).
  15. H. Kwon and N. M. Nasrabadi, "Kernel RX-algorithm: a nonlinear anomaly detector for hyperspectral imagery," IEEE Trans. Geosci. Remote Sens. 43, 388-397 (2005). [CrossRef]
  16. F. Sadjadi, ed., Milestones in performance evaluations of signal and image processing systems, (SPIE Press, 1993).
  17. P. Refregier, V. Laude, and B. Javidi, "Nonlinear joint transform correlation: an optimum solution for adaptive image discrimination and input noise robustness," Opt. Lett. 19, 405-407 (1994).
  18. F. Sadjadi, "Improved target classification using optimum polarimetric SAR signatures," IEEE Trans. Aerosp. Electron. Syst. 38, 38-49 (2002). [CrossRef]
  19. B. Javidi and F. Okano, eds., Three-dimensional television, video, and display technologies, (New York, Springer, 2002).
  20. B. Javidi, ed., Image Recognition and Classification: Algorithms, Systems, and Applications, (New York, Marcel Dekker, 2002). [CrossRef]
  21. B. Javidi and E. Tajahuerce, "Three dimensional object recognition using digital holography," Opt. Lett. 25, 610-612 (2000). [CrossRef]
  22. O. Matoba, T. J. Naughton, Y. Frauel, N. Bertaux, and B. Javidi, "Real-time three-dimensional object reconstruction by use of a phase-encoded digital hologram," Appl. Opt. 41, 6187-6192 (2002). [CrossRef] [PubMed]
  23. Y. Frauel and B. Javidi, "Neural network for three-dimensional object recognition based on digital holography," Opt. Lett. 26, 1478-1480 (2001). [CrossRef]
  24. E. Tajahuerce, O. Matoba, and B. Javidi, "Shift-invariant three-dimensional object recognition by means of digital holography," Appl. Opt. 40, 3877-3886 (2001). [CrossRef]
  25. B. Javidi and D. Kim, "Three-dimensional-object recognition by use of single-exposure on-axis digital holography," Opt. Lett. 30, 236-238 (2005). [CrossRef] [PubMed]
  26. D. Kim and B. Javidi, "Distortion-tolerant 3-D object recognition by using single exposure on-axis digital holography," Opt. Express 12, 5539-5548 (2005). [CrossRef]
  27. S. Yeom and B. Javidi, "Three-dimensional object feature extraction and classification with computational holographic imaging," Appl. Opt. 43, 442-451 (2004). [CrossRef] [PubMed]
  28. B. Javidi, I. Moon, S. Yeom, and E. Carapezza, "Three-dimensional imaging and recognition of microorganism using single-exposure on-line (SEOL) digital holography," Opt. Express 13, 4492-4506 (2005). [CrossRef] [PubMed]
  29. S. Yeom, I. Moon, and B. Javidi, "Real-time 3D sensing, visualization and recognition of dynamic biological micro-organisms," Proceedings of the IEEE 94, 550-566 (2006). [CrossRef]
  30. S. Yeom and B. Javidi, "Three-dimensional recognition of microorganisms," J. Biomed. Opt. 11, 024017 (2006). [CrossRef]
  31. S. Yeom, I. Moon, and B. Javidi, "Two approaches of 3D microorganism recognition using single exposure on-line digital holography," in F. Sadjadi and B. Javidi (eds.), Physics of Automatic Target Recognition, (Springer, 2006).
  32. B. Javidi, I. Moon, and S. Yeom, "3D microorganism sensing, visualization and recognition using single exposure on-line digital holography," Optics and Photonics News 17, 16-21 (2006). [CrossRef]
  33. I. Moon and B. Javidi, "Shape-tolerant three-dimensional recognition of microorganisms using digital holography," Opt. Express 13, 9612-9622 (2005). [CrossRef] [PubMed]
  34. S. Kishk and B. Javidi, "Improved resolution 3D object sensing and recognition using time multiplexed computational integral imaging," Opt. Express 11, 3528-3541 (2003). [CrossRef] [PubMed]
  35. S. Yeom, B. Javidi, and E. Watson, "Photon counting passive 3D image sensing for automatic target recognition," Opt. Express 13, 9310-9330 (2005). [CrossRef] [PubMed]
  36. T. Kreis, ed., Handbook of Holographic Interferometry, (Wiley-VCH, 2005).
  37. J. W. Goodman, Introduction to Fourier Optics, 2nd ed., (Boston, McGraw Hill, 1996).
  38. J. W. Goodman and R. W. Lawrence, "Digital image holograms," Appl. Phys. Lett. 11, 77-79 (1967). [CrossRef]
  39. I. Yamaguchi and T. Zhang, "Phase-shifting digital holography," Opt. Lett. 22, 1268-1270 (1997). [CrossRef] [PubMed]
  40. P. Ferraro, S. Grilli, D. Alfieri, S. D. Nicola, A. Finizio, G. Pierattini, B. Javidi, G. Coppola, and V. Striano, "Extended focused image in microscopy by digital holography," Opt. Express 13, 6738-6749 (2005). [CrossRef] [PubMed]
  41. T. Zhang and I. Yamaguchi, "Three-dimensional microscopy with phase-shifting digital holography," Opt. Lett. 23, 1221-1223 (1998). [CrossRef]
  42. B. R. Brown and A. W. Lohmann, "Complex spatial filtering with binary masks," Appl. Opt. 5, 967-969 (1966). [CrossRef] [PubMed]
  43. M. DaneshPanah and B. Javidi "Segmentation of 3D holographic images using bivariate jointly distributed region snake," Opt. Express (submitted).
  44. O. Germain and P. Refregier "Optimal snake-based segmentation of a random luminance target on a spatially disjoint background," Opt. Lett. 21, 1845 (1996). [CrossRef] [PubMed]
  45. C. Chesnaud, V. Page, and P. Refregier, "Improvement in robustness of the statistically independent region snake-based segmentation method of target-shape tracking," Opt. Lett. 23, 488-490 (1998). [CrossRef]
  46. M. Kass, A. Witkin, and D. Terzopoulus, "Snakes: Active contour models," Int. J. Comput. Vision 1, 321-331 (1987). [CrossRef]
  47. N. Mukhopadhyay, Probability and Statistical Inference, (New York, Marcel Dekker, 2000).
  48. B. Javidi and J. Wang, "Limitations of the classic definition of the signal-to-noise ratio in matched filter based optical pattern recognition," Appl. Opt. 31, 6826-6829 (1992). [CrossRef] [PubMed]
  49. B. Javidi and J. Wang, "Optimum distortion invariant filters for detecting a noisy distorted target in background noise," J. Opt. Soc. Am. A 12, 2604-2614 (1995). [CrossRef]
  50. J. G. Daugman, "Uncertainty relation for resolution in space, spatial frequency, and orientation optimized by two-dimensional visual cortical filters," J. Opt. Soc. Am. A 2, 1160-1169 (1985). [CrossRef]
  51. T. S. Lee, "Image representation using 2D Gabor wavelets," IEEE Trans. Pattern. Anal. Mach. Intell. 18, 959-971 (1996). [CrossRef]
  52. J. G. Daugman, "How iris recognition works," IEEE Trans. Circuits Syst. Video Technol. 14, 21-30 (2004). [CrossRef]
  53. M. Lades, J. C. Vorbruggen, J. Buhmann, J. Lange, C. v.d. Malsburg, R. P. Wurtz, and W. Konen, "Distortion invariant object recognition in the dynamic link architecture," IEEE Trans. Comput. 42, 300-311 (1993). [CrossRef]
  54. R. P. Wurtz, "Object recognition robust under translations, deformations, and changes in background," IEEE Trans. Pattern. Anal. Mach. Intell. 19, 769-775 (1997). [CrossRef]
  55. B. Duc, S. Fischer, and J. Bigun, "Face authentication with Gabor information on deformable graphs," IEEE Trans. Image Process. 8, 504-516 (1999). [CrossRef]
  56. S. Yeom, B. Javidi, Y. J. Roh, and H. S. Cho, "Three-dimensional object recognition using x-ray imaging," Opt. Eng. 43, 027201-1~23 (2005). [CrossRef]
  57. G.W. Snedecor and W.G. Cochran, Statistical Methods, (Iowa State University Press, 1989).
  58. M. G. Lippmann, "Epreuves reversibles donnant la sensation du relief," J. Phys. 7, 821-825 (1908).
  59. H. E. Ives, "Optical properties of a Lippmann lenticuled sheet," J. Opt. Soc. Am. 21, 171-176 (1931). [CrossRef]
  60. T. Okoshi, Three-Dimensional Imaging Techniques, (New York, Academic, 1976).
  61. C. B. Burckhardt, "Optimum parameters and resolution limitation of integral photography," J. Opt. Soc. Am. 58, 71-76 (1968). [CrossRef]
  62. F. Okano, H. Hoshino, J. Arai, and I. Yuyama, "Real-time pickup method for a three-dimensional image based on Integral Photography," Appl. Opt. 36, 1598-1603 (1997). [CrossRef] [PubMed]
  63. F. Jin, J. Jang, and B. Javidi, "Effects of device resolution on three-dimensional integral imaging," Opt. Lett. 29, 1345-1347 (2004). [CrossRef] [PubMed]
  64. J. S. Jang and B. Javidi, "Three-dimensional integral imaging of micro-objects," Opt. Lett. 29, 1230-1232 (2004). [CrossRef] [PubMed]
  65. R. Martínez-Cuenca, G. Saavedra, M. Martínez-Corral, and B. Javidi, "Enhanced depth of field integral imaging with sensor resolution constraints," Opt. Express 12, 5237-5242 (2004). [CrossRef] [PubMed]
  66. H. Arimoto and B. Javidi, "Integral three-dimensional imaging with digital reconstruction," Opt. Lett. 26, 157-159 (2001). [CrossRef]
  67. A. Stern and B. Javidi, "3D image sensing and reconstruction with time-division multiplexed computational integral imaging (CII)," Appl. Opt. 42, 7036-7042 (2003). [CrossRef] [PubMed]
  68. S. Hong and B. Javidi, "Improved resolution 3D object reconstruction using computational integral imaging with time multiplexing," Opt. Express 12, 4579-4588 (2004). [CrossRef] [PubMed]
  69. Y. Frauel, E. Tajahuerce, O. Matoba, A. Castro, and B. Javidi, "Comparison of passive ranging integral imaging and active imaging digital holography for three-dimensional object recognition," Appl. Opt. 43, 452-462 (2004). [CrossRef] [PubMed]
  70. A. Stern and B. Javidi, "Three-dimensional image sensing, visualization, and processing using integral imaging," Proceedings of the IEEE 94, 591-607 (2006). [CrossRef]
