Automated manipulation of non-spherical micro-objects using optical tweezers combined with image processing techniques

Yoshio Tanaka, Hiroyuki Kawada, Ken Hirano, Mitsuru Ishikawa, and Hiroyuki Kitajima


Optics Express, Vol. 16, Issue 19, pp. 15115-15122 (2008)
http://dx.doi.org/10.1364/OE.16.015115


Abstract

Automated optical trapping of non-spherical objects offers great flexibility as a non-contact micromanipulation tool in various research fields. Computer vision control enables fruitful applications of automated manipulation in biology and material science. Here we demonstrate fully-automated, simultaneous, independent trapping and manipulation of multiple non-spherical objects using multiple-force optical clamps. Customized real-time feature recognition and trapping beam control algorithms are also presented.

© 2008 Optical Society of America

1. Introduction

Here we demonstrate fully-automated, simultaneous, independent trapping and manipulation of multiple fluid-borne inhomogeneous objects based on real-time feature recognition and multiple-force optical clamps, where the operator is fully removed from the control sequences. We chose two kinds of sample with nontrivial shape, ellipse-like diatoms and rod-like whiskers, selected with regard to their observed size and the trapping forces that can be applied to them. We also outline the hardware setup and the control sequences used in our demonstrations.

2. Experimental setup

2.1 Optical system and control system

A laser scanning method is well suited to real-time trajectory control of trapped objects based on feature-recognition results, since multiple optical trapping positions can be generated and changed rapidly with little computing time: the trapping positions are written directly, rather than obtained through the complex calculation of a holographic filter for a spatial light modulator (SLM). A laser scanning method with highly reflective mirrors also permits more powerful irradiation than holographic or GPC methods based on an SLM. Hence, we chose the Time-Sharing Synchronized Scanning (T3S) approach [6] as the physical method for applying multiple optical clamps. Our experimental setup is illustrated in Fig. 1(a). An expanded continuous-wave Nd:YAG laser beam (Spectron SL902T, λ = 1064 nm, TEM00, 16 W max) is introduced into an inverted microscope (Olympus IX70) via a shutter, lenses L1 and L2, a PC-controlled 2-axis steering mirror (Newport FMS-300), a relay lens, and the fluorescence port, and is reflected upward by a dichroic mirror to an oil-immersion objective. The 3D focal position of the beam is controlled on the XY-plane by the 2-axis steering mirror, which can tilt at a rate similar to that of a piezoelectric mirror [14] (its closed-loop amplitude bandwidth exceeds 1 kHz), and along the Z-axis by lens L1, mounted on a PC-controlled linear stage that moves parallel to the optical axis (maximum speed 800 mm/s). No automated control of the Z-axis is implemented, because the linear stage cannot move quickly enough to keep pace with the time-shared scan on the XY-plane; focal control along the Z-axis is therefore commanded only by PC mouse, to raise samples to a specified Z-coordinate. Samples are illuminated by the microscope's standard halogen light source. An image processor (Hitachi IP5005) digitizes the images from a color CCD camera (Sony DXC-151A) in real time. The control software, programmed in C++ (Microsoft Visual C++ 6.0) for image processing and device control, runs on a PC (Intel Core 2 Duo CPU).
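To make the time-sharing scheme concrete, the following sketch (not the authors' code) cycles a 2-axis steering mirror through the current list of clamp positions, dwelling a fixed period at each; the driver function set_mirror_xy() is a hypothetical placeholder for the actual mirror interface.

```cpp
// Minimal sketch of a time-sharing synchronized scanning (T3S) loop, assuming a
// hypothetical driver function set_mirror_xy() that tilts the 2-axis steering
// mirror so the trap focus lands at (x, y) in the sample plane.
#include <chrono>
#include <thread>
#include <vector>

struct TrapPoint { double x_um; double y_um; };   // clamp position on the XY-plane

// Hypothetical placeholder; the real system commands the PC-controlled
// steering mirror through its own device API.
void set_mirror_xy(double /*x_um*/, double /*y_um*/) { /* hardware call */ }

// Visit every clamp position in turn, dwelling dwell_ms at each one.
// With a dwell of 20 ms per clamp (the shared irradiation time quoted in
// Section 3.3), each object is revisited often enough to remain clamped
// between visits.
void run_time_shared_scan(const std::vector<TrapPoint>& clamps,
                          int dwell_ms, int cycles)
{
    for (int c = 0; c < cycles; ++c)
        for (const TrapPoint& p : clamps) {
            set_mirror_xy(p.x_um, p.y_um);
            std::this_thread::sleep_for(std::chrono::milliseconds(dwell_ms));
        }
}
```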

Fig. 1. Schematic diagram of (a) the time-sharing synchronized scanning optical tweezers and (b) the control sequences for automated multiple clamps and manipulation.

2.2 Control sequences

Figure 1(b) outlines the control sequences for automated clamping and manipulation of multiple non-spherical objects. Our approach consists of four processes: contour shape detection (process 1), model matching (process 2), automated multiple clamping (process 3), and automated manipulation (process 4). Each process is implemented in the C++ software, depends on the results of the preceding process, and is adapted to the specific demonstration. First, in process 1, the contours of objects are extracted using a digital filter that finds local edges, for example a Sobel operator [15], followed by a noise-reduction step that removes isolated one-pixel elements from the binary images.
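As a simplified illustration of process 1 (not the IP5005-based implementation itself), the sketch below applies a Sobel operator to a grayscale frame, thresholds the gradient magnitude into a binary edge map, and removes isolated one-pixel elements; the image layout and threshold value are assumptions.

```cpp
// Simplified sketch of process 1: Sobel edge extraction followed by removal of
// isolated one-pixel elements. Images are assumed to be row-major 8-bit
// grayscale buffers; the threshold is an arbitrary illustrative value.
#include <cmath>
#include <cstdint>
#include <vector>

struct Image {
    int w, h;
    std::vector<uint8_t> px;                        // row-major, w*h pixels
    uint8_t  at(int x, int y) const { return px[y * w + x]; }
    uint8_t& at(int x, int y)       { return px[y * w + x]; }
};

// Sobel gradient magnitude, thresholded to a binary (0/255) edge map.
Image sobel_edges(const Image& in, int threshold)
{
    Image out{in.w, in.h, std::vector<uint8_t>(in.w * in.h, 0)};
    for (int y = 1; y < in.h - 1; ++y)
        for (int x = 1; x < in.w - 1; ++x) {
            int gx = -in.at(x-1,y-1) + in.at(x+1,y-1)
                     - 2*in.at(x-1,y) + 2*in.at(x+1,y)
                     - in.at(x-1,y+1) + in.at(x+1,y+1);
            int gy = -in.at(x-1,y-1) - 2*in.at(x,y-1) - in.at(x+1,y-1)
                     + in.at(x-1,y+1) + 2*in.at(x,y+1) + in.at(x+1,y+1);
            int mag = static_cast<int>(std::sqrt(double(gx*gx + gy*gy)));
            out.at(x, y) = (mag > threshold) ? 255 : 0;
        }
    return out;
}

// Noise reduction: clear edge pixels that have no 8-connected edge neighbour.
void remove_isolated_pixels(Image& bin)
{
    Image copy = bin;
    for (int y = 1; y < bin.h - 1; ++y)
        for (int x = 1; x < bin.w - 1; ++x) {
            if (copy.at(x, y) == 0) continue;
            int neighbours = 0;
            for (int dy = -1; dy <= 1; ++dy)
                for (int dx = -1; dx <= 1; ++dx)
                    if ((dx || dy) && copy.at(x + dx, y + dy)) ++neighbours;
            if (neighbours == 0) bin.at(x, y) = 0;
        }
}
```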

Secondly, in process 2, the extracted contours are matched to parametric shape models (the elliptic and skeleton models described in Section 3), and the model parameters must be identified within an allowable computing time t_D set by Brownian motion, i.e., roughly the time for an object of average radius w to diffuse a distance w (using the Stokes-Einstein diffusion coefficient D = k_B T / (6πηw)):

t_D = w²/D = 6πηw³ / (k_B T),     (1)

where η, w, k_B, and T are the fluid viscosity, the average radius of the objects, the Boltzmann constant, and the absolute temperature, respectively. For typical samples ranging in size from w = 1 to 3 µm, room temperature (T = 293 K), and the viscosity of water (η = 0.001 Pa s), the allowable computing time t_D ranges from 5 to 126 s.
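Eq. (1) can be checked numerically for the quoted parameter range; the short program below (a sketch, not part of the published software) reproduces the 5 s to 126 s figures.

```cpp
// Evaluate Eq. (1), t_D = 6*pi*eta*w^3 / (k_B * T), for the sample sizes
// quoted in the text (w = 1-3 um, water at room temperature).
#include <cstdio>

int main()
{
    const double kB  = 1.380649e-23;   // Boltzmann constant [J/K]
    const double T   = 293.0;          // absolute temperature [K]
    const double eta = 0.001;          // viscosity of water [Pa s]
    const double pi  = 3.14159265358979;

    for (double w_um = 1.0; w_um <= 3.0; w_um += 1.0) {
        double w  = w_um * 1e-6;                       // average radius [m]
        double tD = 6.0 * pi * eta * w * w * w / (kB * T);
        std::printf("w = %.0f um  ->  t_D = %.0f s\n", w_um, tD);
        // prints roughly 5 s, 37 s, and 126 s, matching the values in the text
    }
    return 0;
}
```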

Thirdly, in process 3, the detected non-spherical objects are automatically clamped at pre-determined points on the modeled shape using the T3S optical tweezers. Finally, in process 4, all clamped objects are automatically translated/rotated from their initial positions/orientations to destinations that are automatically allocated by taking open spaces and the identified parameters into account. Once collision-free paths are generated from the pre-designed manipulation/sorting plan, the simultaneous manipulations of the multiple objects are performed under an open-loop control strategy.
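The open-loop motion in process 4 amounts to stepping each clamped object (and hence its clamp points) from its detected pose to its allocated destination. The sketch below shows one simple way to generate such waypoints; the step count and pose representation are illustrative assumptions, not the authors' planner.

```cpp
// Sketch of open-loop waypoint generation for process 4: interpolate an
// object's pose (x, y, orientation) from its detected value to its allocated
// destination. The pose representation and step count are illustrative.
#include <cmath>
#include <vector>

struct Pose { double x_um, y_um, theta_rad; };

std::vector<Pose> interpolate_path(const Pose& start, const Pose& goal, int steps)
{
    const double kTwoPi = 6.283185307179586;
    // Wrap the orientation difference into [-pi, pi] so the object takes the
    // shorter of the two possible rotations.
    const double dtheta = std::remainder(goal.theta_rad - start.theta_rad, kTwoPi);

    std::vector<Pose> path;
    path.reserve(steps + 1);
    for (int i = 0; i <= steps; ++i) {
        const double s = static_cast<double>(i) / steps;
        path.push_back({start.x_um + s * (goal.x_um - start.x_um),
                        start.y_um + s * (goal.y_um - start.y_um),
                        start.theta_rad + s * dtheta});
    }
    return path;
}
```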

3. Demonstrations

3.1 Samples

For the demonstrations, we chose two kinds of sample with non-spherical, nontrivial shapes. One is an ellipse-like diatom, which has cell walls of silica consisting of two interlocking symmetrical valves [16]. The diatoms were collected from a creek and cleaned in an acid solution to remove organic matter. We selected diatoms roughly 20 µm long and 4 µm wide with regard to the required trapping power and the accuracy with which their postures can be detected by image processing. The other sample consisted of aluminum borate whiskers. These whiskers are a refractory compound forming needle-shaped crystals or rod-like particles and are a suitable material for the reinforcement of plastics or metal alloys [17]. We used rod-like particles 10 µm to 15 µm long and roughly 1 µm wide, selected for the same reasons mentioned above. In our demonstrations, these differently shaped samples were dispersed in deionized water.

3.2 Automated trapping and 3D manipulation of diatoms

Figure 2 (Right: Media 1) shows a case of three-point clamping. First, three diatoms were automatically detected and stably clamped simultaneously at edge points C3 on each diatom (Fig. 2(a)). Processes 1 and 2 took roughly 4 seconds to complete and to identify the control parameters. In process 3, all the diatoms could then be automatically attracted to the identified trapping positions (C3) and stably clamped, since 4 seconds is ample compared with t_D = 37 s, calculated from Eq. (1) using half the minor axis of the elliptic model (w = 2 µm), room temperature (T = 293 K), and the viscosity of water (η = 0.001 Pa s). Next, all the clamped diatoms were rotated from their detected initial orientations to the desired orientations, in which the major axis of each diatom is perpendicular to the translation direction for stable dragging (Fig. 2(Left) and (b)). Finally, subsequent translation (Fig. 2(b)) and rotation (Fig. 2(c)) without collision arranged all the diatoms automatically and simultaneously in the same orientation at the pre-determined destinations (Fig. 2(d)), while each diatom retained its initial posture, giving a repeatable 2D view in the microscope images. Note that this initial 2D view of each diatom is almost always observed, since the diatoms lie on the cover glass because their valves are sufficiently flat.

Fig. 2. Left: Elliptic model and control parameters for detecting and manipulating diatoms. Right: (Media 1) Automated multiple clamps and simultaneous manipulation of multiple diatoms. Each diatom is automatically clamped at three edge points C3, and is rotated/translated to be arranged in the same final orientation.

Thus, the multiple-force optical clamps with computer vision enabled the automatic manipulation of multiple diatoms along a collision-free dynamic path, with each diatom retaining its own unique posture corresponding to both the position and the number of clamps.
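In-plane rotation of a clamped diatom reduces to rotating its clamp points about the object's centroid and letting the time-shared beam follow the updated coordinates; a minimal sketch of this coordinate update (not the authors' code) is given below.

```cpp
// Sketch: rotate the clamp points of one object about its centroid by a small
// angle increment, as used when turning each diatom toward its target
// orientation. The point and centroid types are illustrative.
#include <cmath>
#include <vector>

struct Point2D { double x_um, y_um; };

void rotate_clamps_about_centroid(std::vector<Point2D>& clamps,
                                  const Point2D& centroid, double dtheta_rad)
{
    const double c = std::cos(dtheta_rad);
    const double s = std::sin(dtheta_rad);
    for (Point2D& p : clamps) {
        const double dx = p.x_um - centroid.x_um;
        const double dy = p.y_um - centroid.y_um;
        p.x_um = centroid.x_um + c * dx - s * dy;
        p.y_um = centroid.y_um + s * dx + c * dy;
    }
}
```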

Fig. 3. (Media 2) Automated multiple clamps and simultaneous manipulation of multiple diatoms. Each diatom is automatically clamped at the two edge points C2 in Fig. 2. In this case, (a): shortly after irradiation of the clamp beams, the diatoms autonomously turn 90 degrees about the major axis of the elliptic model in Fig. 2; (f): shortly after release of the clamps, the diatoms return to their flat posture because of gravity, which gives a repeatable 2D view in microscope images.

3.3 Automated trapping and sorting of whiskers

Here, we demonstrate that multiple whiskers can be automatically sorted by length using the two-point clamp, with each whisker trapped at both tip positions C2 in Fig. 4(Left). To recognize all whiskers in the field of view, we apply the Hough transform to the skeleton model in Fig. 4(Left), which consists of four parameters, x, y, θ, and l: the 2D position, orientation, and length of the whisker, respectively. For the demonstration in this section (Media 3), we used a ×100 oil-immersion objective (Olympus UPlanApo, NA 1.35, IR). We adjusted the beam power to roughly 350 mW at the entrance pupil of the objective and, as in the demonstrations of the previous section, set the shared irradiation time to 20 ms for each clamp position. First, three whiskers were automatically detected and stably clamped simultaneously at both tip positions of each whisker (Fig. 4(a)). Secondly, the clamped whiskers were simultaneously translated to the left side of the field while keeping their initial orientations (Fig. 4(b)). Thirdly, after rotating from their initial orientations to horizontal, the whiskers were sequentially translated from the left side to the right side of the field in order to sort them by length (Fig. 4(c)). The paths and order of translation are indicated by the numbered black arrows in Fig. 4(b). Finally, the horizontally oriented whiskers were simultaneously translated from the right side to the center of the field and arranged vertically in order of length, as shown in Fig. 4(d). Note that these dexterous sorting movements, which avoid collisions between the whiskers and are represented by the black arrows, could be generated automatically in process 4 from the recognized parameters of the skeleton model.

Fig. 4. Left: skeleton model and control parameters for detecting and manipulating whiskers. Right: (Media 3) Automated clamps and simultaneous dexterous manipulation of multiple whiskers for sorting by length. The whiskers are automatically clamped at both tip positions, C2, of each skeleton, and are translated/rotated to be arranged automatically according to their length measured by image processing.
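Given the identified skeleton parameters (x, y, θ, l), the two clamp positions C2 lie at the rod tips, and sorting by length then reduces to ordering the detected parameter sets. The sketch below illustrates both steps, under the assumption that (x, y) denotes the rod center.

```cpp
// Sketch: derive the two tip clamp positions C2 from the skeleton parameters
// (x, y, theta, l) and sort the detected whiskers by length. It is assumed
// here that (x, y) denotes the center of the rod.
#include <algorithm>
#include <cmath>
#include <utility>
#include <vector>

struct Skeleton { double x_um, y_um, theta_rad, length_um; };
struct Point2D  { double x_um, y_um; };

// Both tips of the rod, half a length away from the center along theta.
std::pair<Point2D, Point2D> tip_clamps(const Skeleton& s)
{
    const double hx = 0.5 * s.length_um * std::cos(s.theta_rad);
    const double hy = 0.5 * s.length_um * std::sin(s.theta_rad);
    return { Point2D{s.x_um - hx, s.y_um - hy},
             Point2D{s.x_um + hx, s.y_um + hy} };
}

// Order whiskers by length so destinations can be assigned in sorted order.
void sort_by_length(std::vector<Skeleton>& whiskers)
{
    std::sort(whiskers.begin(), whiskers.end(),
              [](const Skeleton& a, const Skeleton& b) {
                  return a.length_um < b.length_um;
              });
}
```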

3.4 Discussion

We applied the Hough technique to detect the ellipse-like and rod-like samples, which have five and four model parameters to be identified, respectively. The Hough transform is a widely used detection algorithm known to be robust under noisy conditions. In general, it can detect objects of any arbitrary shape that can be specified by parameters in a 2D image, and its detection ability is not limited by object orientation or scale. Thus, the Hough transform has favorable properties for automating optical trapping and subsequent manipulation based on visual information, although its main limitation is that it slows with increasing numbers of parameters. Our system, of course, cannot stably clamp smaller 3D objects whose 2D view changes before the Hough process is complete.
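For reference, the sketch below shows a standard straight-line Hough transform over a (ρ, θ) accumulator with 1-degree orientation bins; the system described here uses richer ellipse and skeleton parameterizations, but the voting principle and the resolution-versus-speed trade-off are the same.

```cpp
// Sketch of a standard straight-line Hough transform over a (rho, theta)
// accumulator with 1-degree orientation bins. The published system uses
// richer ellipse/skeleton parameterizations, but the voting principle is
// the same: finer bins or more parameters mean more work per edge pixel.
#include <cmath>
#include <vector>

struct Pixel { int x, y; };

// Vote each edge pixel into the accumulator; acc[t][r] counts how many edge
// pixels are consistent with the line rho = x*cos(theta) + y*sin(theta).
std::vector<std::vector<int>> hough_lines(const std::vector<Pixel>& edges,
                                          int img_w, int img_h)
{
    const double kPi = 3.14159265358979;
    const int theta_bins = 180;                        // 1-degree resolution
    const int rho_max = static_cast<int>(std::ceil(std::hypot(img_w, img_h)));
    const int rho_bins = 2 * rho_max + 1;              // rho in [-rho_max, rho_max]

    std::vector<std::vector<int>> acc(theta_bins, std::vector<int>(rho_bins, 0));
    for (const Pixel& p : edges)
        for (int t = 0; t < theta_bins; ++t) {
            double theta = t * kPi / theta_bins;
            int rho = static_cast<int>(std::lround(p.x * std::cos(theta) +
                                                   p.y * std::sin(theta)));
            ++acc[t][rho + rho_max];
        }
    return acc;                                        // peaks correspond to lines
}
```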

We used the programming language C++ rather than a graphical programming environment such as LabVIEW, since tight integration between the control of the T3S optical tweezers for multiple clamps and the image processing for the Hough transform was needed for real-time control. To improve processing speed, we also restricted the orientation resolution in the parameter space to 1 degree. The time required for the initial clamps is mainly limited by the Hough process and was about 4 seconds. For automating the subsequent manipulation process (that is, process 4 above), the Hough process is still too slow to run on every video frame and is hard to include in a real-time feedback loop, even with the latest PC. Thus, the simultaneous manipulations of the multiple diatoms/whiskers along collision-free paths were demonstrated under open-loop control based on the pre-designed manipulation/sorting planning. The real power of this approach will become more apparent once live vision feedback is installed in the system.

4. Conclusion

We have demonstrated the feasibility of fully-automated trapping and manipulation of non-spherical micron-sized objects using the T3S optical tweezers combined with computer vision techniques. Ellipse- and rod-like objects suspended in water were automatically recognized, trapped, and manipulated. To our knowledge, this is the first demonstration of fully-automated, simultaneous trapping and manipulation of multiple non-spherical objects using a multiple-force optical clamp technique. Although we have dealt with only two kinds of shaped objects, we believe these demonstrations will open up new possibilities for manipulating arbitrarily shaped objects using multiple clamps based on computer vision techniques. Automated clamping of biological materials will enable exciting applications in cell biology, such as non-contact mechanotransduction studies in live cells [20]. Furthermore, it is expected that control schemes coupled with high-speed vision feedback, in which the feedback period is shorter than the conventional video frame time (33 ms), may enable fruitful applications in the dynamic 3D motion control of arbitrarily shaped objects such as microstructures in MEMS and lab-on-a-chip devices.

Acknowledgments

We would like to thank Mr. Hideo Wada of AIST Shikoku for preparation of the whiskers. This work was partly supported by Grants-in-Aid for Scientific Research (C, #20560252) from the Japan Society for the Promotion of Science.

References and Links

1. A. Ashkin, "Acceleration and trapping of particles by radiation pressure," Phys. Rev. Lett. 24, 156–159 (1970). [CrossRef]
2. D. G. Grier, "A revolution in optical manipulation," Nature 424, 810–816 (2003). [CrossRef] [PubMed]
3. K. Sasaki, M. Koshioka, H. Misawa, N. Kitamura, and H. Masuhara, "Pattern-formation and flow-control of fine particles by laser-scanning micromanipulation," Opt. Lett. 16, 1463–1465 (1991). [CrossRef] [PubMed]
4. J. E. Curtis, B. A. Koss, and D. G. Grier, "Dynamic holographic optical tweezers," Opt. Commun. 207, 169–175 (2002). [CrossRef]
5. P. J. Rodrigo, R. L. Eriksen, V. R. Daria, and J. Glückstad, "Interactive light-driven and parallel manipulation of inhomogeneous particles," Opt. Express 10, 1550–1556 (2002). [PubMed]
6. F. Arai, K. Yoshikawa, T. Sakami, and T. Fukuda, "Synchronized laser micromanipulation of multiple targets along each trajectory by single laser," Appl. Phys. Lett. 85, 4301–4303 (2004). [CrossRef]
7. P. J. Rodrigo, L. Gammelgaard, P. Bøggild, I. R. Perch-Nielsen, and J. Glückstad, "Actuation of microfabricated tools using multiple GPC-based counterpropagating-beam traps," Opt. Express 13, 6899–6904 (2005). [CrossRef] [PubMed]
8. J. T. Finer, R. M. Simmons, and J. A. Spudich, "Single myosin molecule mechanics: piconewton forces and nanometer steps," Nature 368, 113–119 (1994). [CrossRef] [PubMed]
9. P. J. H. Bronkhorst, G. J. Streekstra, J. Grimbergen, E. J. Nijhof, J. J. Sixma, and G. J. Brakenhoff, "A new method to study shape recovery of red blood cell using multiple optical trapping," Biophys. J. 69, 1666–1673 (1995). [CrossRef] [PubMed]
10. Y. Tanaka, K. Hirano, H. Nagata, and M. Ishikawa, "Real-time three-dimensional orientation control of non-spherical micro-objects using laser trapping," Electron. Lett. 43, 412–414 (2007). [CrossRef]
11. S. C. Chapin, V. Germain, and E. R. Dufresne, "Automated trapping, assembly, and sorting with holographic optical tweezers," Opt. Express 14, 13095–13100 (2006). [CrossRef] [PubMed]
12. I. R. Perch-Nielsen, P. J. Rodrigo, C. A. Alonzo, and J. Glückstad, "Autonomous and 3D real-time multi-beam manipulation in a microfluidic environment," Opt. Express 14, 12199–12205 (2006). [CrossRef] [PubMed]
13. P. J. Rodrigo, L. Kelemen, C. A. Alonzo, I. R. Perch-Nielsen, J. S. Dam, P. Ormos, and J. Glückstad, "2D optical manipulation and assembly of shape-complementary planar microstructures," Opt. Express 15, 9009–9014 (2007). [CrossRef] [PubMed]
14. C. Mio and D. W. M. Marr, "Optical trapping for the manipulation of colloidal particles," Adv. Mater. 12, 917–920 (2000). [CrossRef]
15. D. H. Ballard and C. M. Brown, Computer Vision (Prentice-Hall, 1982), Chap. 3–4.
16. Y. A. Hicks, D. Marshall, P. L. Rosin, R. R. Martin, D. G. Mann, and S. J. M. Droop, "A model of diatom shape and texture for analysis, synthesis and identification," Mach. Vision Appl. 17, 297–307 (2006). [CrossRef]
17. H. Wada, K. Sakane, T. Kitamura, H. Hata, and H. Kambara, "Synthesis of aluminium borate whiskers in potassium sulphate flux," J. Mater. Sci. Lett. 10, 1076–1077 (1991). [CrossRef]
18. Y. Tanaka, A. Murakami, K. Hirano, H. Nagata, and M. Ishikawa, "Development of PC-controlled laser manipulation system with image processing functions," Proc. SPIE 6374, 63740P1-P8 (2006).
19. R. Agarwal, K. Ladavac, Y. Roichman, G. Yu, C. M. Lieber, and D. G. Grier, "Manipulation and assembly of nanowires with holographic optical traps," Opt. Express 13, 8906–8912 (2005). [CrossRef] [PubMed]
20. X. Trepat, L. Deng, S. S. An, D. Navajas, D. J. Tschumperlin, W. T. Gerthoffer, J. P. Butler, and J. Fredberg, "Universal physical responses to stretch in the living cell," Nature 447, 592–596 (2007). [CrossRef] [PubMed]

OCIS Codes
(140.7010) Lasers and laser optics : Laser trapping
(150.0150) Machine vision : Machine vision
(170.4520) Medical optics and biotechnology : Optical confinement and manipulation

ToC Category:
Optical Trapping and Manipulation

History
Original Manuscript: July 24, 2008
Revised Manuscript: September 8, 2008
Manuscript Accepted: September 8, 2008
Published: September 10, 2008

Virtual Issues
Vol. 3, Iss. 11 Virtual Journal for Biomedical Optics

Citation
Yoshio Tanaka, Hiroyuki Kawada, Ken Hirano, Mitsuru Ishikawa, and Hiroyuki Kitajima, "Automated manipulation of non-spherical micro-objects using optical tweezers combined with image processing techniques," Opt. Express 16, 15115-15122 (2008)
http://www.opticsinfobase.org/vjbo/abstract.cfm?URI=oe-16-19-15115





Supplementary Material


» Media 1: MOV (1023 KB)     
» Media 2: MOV (2103 KB)     
» Media 3: MOV (1581 KB)     
