Applied Optics

Editor: Joseph N. Mait
Vol. 53, Iss. 14 — May 10, 2014, pp. 3101–3109
Three-dimensional inline inspection for substrate warpage and ball grid array coplanarity using stereo vision

Takeshi Nakazawa and Ayman Samara


Applied Optics, Vol. 53, Issue 14, pp. 3101-3109 (2014)
http://dx.doi.org/10.1364/AO.53.003101


Abstract

We present a method for full-field 3D measurement of substrate warpage and ball grid array (BGA) coplanarity that is suitable for inline back-end inspection and process monitoring. To evaluate the performance of the proposed system, the linearity between our system and a reference confocal microscope is studied by repeating measurements 35 times on a particular substrate sample (38 mm × 28.5 mm). For the warpage measurement, the point-to-point correlation coefficient (1σ) between the two methods is 0.968 ± 0.002, and the 2σ difference is 25.15 ± 0.20 μm; the 1σ repeatability of the substrate warpage is 4.2 μm. For BGA coplanarity inspection, the bump-level correlation coefficient is 0.957 ± 0.001 and the 2σ difference is 28.79 ± 0.14 μm; the 1σ repeatability of the BGA coplanarity is 3.7 μm. Data acquisition takes about 0.2 s for a full-field measurement.

© 2014 Optical Society of America

1. Introduction

In the semiconductor industry, electronic packaging plays an essential role in improving the performance of electronic devices. The goal in producing a high-performance electronic system is to package devices as densely as possible in order to minimize circuit path length [1]. To achieve this goal, the trend in integrated circuit (IC) packaging is to increase the input/output (I/O) count and to decrease the package size [2]. The ball grid array (BGA) is the most common packaging technique used in industry because of its high I/O density and shorter electrical paths. With high-density packaging, however, process controls for assembly become critical for reducing problems such as connection failures between the BGA and a circuit board. Thus it is important to measure the IC package surface profile to decrease device failures.

Two important quality metrics for package inspection are substrate warpage and BGA coplanarity. Figure 1 shows a schematic of an IC package. A substrate warps because of thermal cycling during the manufacturing process and because its materials have different expansion rates. To calculate the BGA coplanarity, the z coordinate of each ball is required, and a regression plane is defined based on these z locations. Coplanarity is defined as the distance between the maximum z and the minimum z from the best-fit plane. BGA coplanarity directly affects solder joint reliability, and the causes of large coplanarity are substrate warpage and ball height differences. Substrate warpage is typically the major contributor to any lack of coplanarity since the solder ball heights are relatively uniform [3]. Therefore, substrate warpage is one of the key metrics for the quality control of IC packages.

Fig. 1. Schematics of an IC package (concave warpage).
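The coplanarity definition above (the spread of z residuals about the best-fit plane) can be sketched in a few lines of NumPy. This is an illustrative reimplementation, not the authors' MATLAB code, and the sample ball grid is made up:

```python
import numpy as np

def coplanarity(points):
    """Coplanarity of ball coordinates: spread of the z residuals
    about the least-squares regression plane z = a*x + b*y + c."""
    pts = np.asarray(points, dtype=float)
    x, y, z = pts[:, 0], pts[:, 1], pts[:, 2]
    # Fit the plane by solving [x y 1] @ (a, b, c) = z in least squares.
    A = np.column_stack([x, y, np.ones_like(x)])
    coef, *_ = np.linalg.lstsq(A, z, rcond=None)
    resid = z - A @ coef
    # Distance between the maximum and minimum z from the best-fit plane.
    return resid.max() - resid.min()

# A perfectly planar 4x4 ball grid has (numerically) zero coplanarity;
# lifting one ball raises it.
grid = [(i, j, 0.0) for i in range(4) for j in range(4)]
flat = coplanarity(grid)
```

A warped substrate or a single tall ball both increase the residual spread, which is why warpage dominates coplanarity when ball heights are uniform.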

Optical profilers have long been used for nondestructive measurement. Common optical inspection tools used in IC package characterization are confocal microscopes, white-light interferometers (WLI), laser devices [4,5], fringe projection devices [6], and machine vision techniques [7]. Depending on the purpose of the measurement, an appropriate metrology should be employed in order to maximize performance. For example, confocal microscopes and WLI are widely used in laboratories to characterize sampled IC packages because measurement accuracy is more important than throughput. On the other hand, factories use machine vision systems for large-volume inspection because of their high throughput and cost advantage. From a quality-control perspective, the high-speed inspection systems used in factories play a key role in monitoring production yield. Thus we focus on developing an inline inspection system for use in factories, rather than in laboratories, to meet the demand for measuring high-density BGA packages.

Stereo vision reconstructs a 3D object by finding matching pixels (point correspondences) between images captured by two cameras from different view angles and converting these 2D pixel coordinates into 3D depth. In computer vision, the point correspondence algorithm has been one of the most widely studied subjects [8–10]. For accurate reconstruction, the transformation relationships between a camera lens and an image plane, as well as between a camera and a scene, must be determined. This process is called camera calibration. Tsai [11] and Zhang [12,13] have developed the most commonly used calibration methods in computer vision. Although there are many applications of stereo vision to 3D measurement [14–20], studies of BGA coplanarity, substrate warpage, and bump height measurement using stereo vision are limited [21,22].

In this paper, we propose an inline stereo vision system for BGA coplanarity and substrate warpage inspection. In Section 2, the theoretical aspects of stereo vision are discussed. In Section 3, we describe the hardware setup and calibration procedure as well as computer simulation and experimental results for substrate warpage and BGA coplanarity. Finally, the conclusion is given in Section 4.

2. Theory

Figure 2 shows the epipolar geometry [23]. Stereo vision employs two cameras viewing an object from different angles. The world coordinates are given by Xw, Yw, and Zw. The camera coordinates are given by x1, y1 and x2, y2 for camera 1 and camera 2, respectively. The points C1 and C2 are the camera centers. The object point A in the world coordinate system is imaged to a1 by camera 1 and to a2 by camera 2. The points C1, C2, and A define a plane called the epipolar plane. The line connecting C1 and C2 is called the baseline, and its intersection points with each image plane are called the epipoles e1 and e2. The epipolar plane intersects the image planes in the epipolar lines l1 and l2.

Fig. 2. Epipolar geometry.

We can write the relationship between a1, a2, and A as follows:
a1 = P1 A,
(1)
a2 = P2 A,
(2)
where P is the 3×4 homogeneous camera projection matrix, which maps a point in the world coordinate system to the corresponding point in the camera coordinate system.

Given known point correspondences a and A, the matrix P can be recovered by using the direct linear transformation (DLT) [24] as

$$
\begin{bmatrix}
X_1 & Y_1 & Z_1 & 1 & 0 & 0 & 0 & 0 & -x_1 X_1 & -x_1 Y_1 & -x_1 Z_1 & -x_1 \\
0 & 0 & 0 & 0 & X_1 & Y_1 & Z_1 & 1 & -y_1 X_1 & -y_1 Y_1 & -y_1 Z_1 & -y_1 \\
 & & & & & & \vdots & & & & & \\
X_i & Y_i & Z_i & 1 & 0 & 0 & 0 & 0 & -x_i X_i & -x_i Y_i & -x_i Z_i & -x_i \\
0 & 0 & 0 & 0 & X_i & Y_i & Z_i & 1 & -y_i X_i & -y_i Y_i & -y_i Z_i & -y_i
\end{bmatrix}
\begin{pmatrix}
P_{11} \\ P_{12} \\ P_{13} \\ P_{14} \\ P_{21} \\ P_{22} \\ P_{23} \\ P_{24} \\ P_{31} \\ P_{32} \\ P_{33} \\ P_{34}
\end{pmatrix} = 0
$$
(3)
or simply Kp = 0. This can be solved by singular value decomposition (SVD):
K = U S V^T.
(4)
Then p is the last column of V [25]. Before applying the SVD, it is important to perform appropriate normalization to obtain meaningful results [26].
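The SVD solution of Kp = 0 can be sketched as follows. This is a NumPy illustration with a synthetic camera, not the authors' MATLAB implementation; because the synthetic correspondences are exact, the normalization of Ref. [26] is omitted here:

```python
import numpy as np

def dlt_projection_matrix(world_pts, image_pts):
    """Estimate the 3x4 projection matrix P from >= 6 correspondences
    by solving K p = 0 (Eq. (3)) with the SVD."""
    rows = []
    for (X, Y, Z), (x, y) in zip(world_pts, image_pts):
        rows.append([X, Y, Z, 1, 0, 0, 0, 0, -x*X, -x*Y, -x*Z, -x])
        rows.append([0, 0, 0, 0, X, Y, Z, 1, -y*X, -y*Y, -y*Z, -y])
    K = np.asarray(rows, dtype=float)
    # p is the right singular vector of the smallest singular value,
    # i.e. the last column of V (last row of V^T).
    _, _, Vt = np.linalg.svd(K)
    return Vt[-1].reshape(3, 4)

# Round-trip check with a synthetic camera (values are illustrative).
P_true = np.array([[800.0, 0.0, 320.0, 10.0],
                   [0.0, 800.0, 240.0, 20.0],
                   [0.0, 0.0, 1.0, 50.0]])
world = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1],
                  [1, 1, 0], [1, 0, 1], [0, 1, 1], [2, 1, 1]], dtype=float)
proj = (P_true @ np.column_stack([world, np.ones(len(world))]).T).T
image = proj[:, :2] / proj[:, 2:3]          # dehomogenize to pixels
P_est = dlt_projection_matrix(world, image)
P_est /= P_est[2, 3]                        # fix the arbitrary scale
P_ref = P_true / P_true[2, 3]
```

Since p is only determined up to scale, both matrices are normalized by their (3, 4) element before comparison.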

Once the system parameters are determined, object heights can be reconstructed from these P matrices and a set of corresponding points a1 and a2 in each image plane. The simplest approach to height reconstruction is linear triangulation [27]. For each camera we have a1 = P1 A and a2 = P2 A, which can also be expressed as a1 × (P1 A) = 0 and a2 × (P2 A) = 0. These equations can be combined as

$$
\begin{bmatrix}
x_1 P^{(1)}_{31} - P^{(1)}_{11} & x_1 P^{(1)}_{32} - P^{(1)}_{12} & x_1 P^{(1)}_{33} - P^{(1)}_{13} & x_1 P^{(1)}_{34} - P^{(1)}_{14} \\
y_1 P^{(1)}_{31} - P^{(1)}_{21} & y_1 P^{(1)}_{32} - P^{(1)}_{22} & y_1 P^{(1)}_{33} - P^{(1)}_{23} & y_1 P^{(1)}_{34} - P^{(1)}_{24} \\
x_1 P^{(1)}_{21} - y_1 P^{(1)}_{11} & x_1 P^{(1)}_{22} - y_1 P^{(1)}_{12} & x_1 P^{(1)}_{23} - y_1 P^{(1)}_{13} & x_1 P^{(1)}_{24} - y_1 P^{(1)}_{14} \\
x_2 P^{(2)}_{31} - P^{(2)}_{11} & x_2 P^{(2)}_{32} - P^{(2)}_{12} & x_2 P^{(2)}_{33} - P^{(2)}_{13} & x_2 P^{(2)}_{34} - P^{(2)}_{14} \\
y_2 P^{(2)}_{31} - P^{(2)}_{21} & y_2 P^{(2)}_{32} - P^{(2)}_{22} & y_2 P^{(2)}_{33} - P^{(2)}_{23} & y_2 P^{(2)}_{34} - P^{(2)}_{24} \\
x_2 P^{(2)}_{21} - y_2 P^{(2)}_{11} & x_2 P^{(2)}_{22} - y_2 P^{(2)}_{12} & x_2 P^{(2)}_{23} - y_2 P^{(2)}_{13} & x_2 P^{(2)}_{24} - y_2 P^{(2)}_{14}
\end{bmatrix}
\begin{pmatrix} X \\ Y \\ Z \\ 1 \end{pmatrix} = 0,
$$
(5)
where P^(n)_ij denotes an element of the P1 or P2 matrix. This equation, too, can be solved by the SVD.
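Linear triangulation can be sketched with two rows per camera (the independent rows of the cross-product constraint), again as a hedged NumPy illustration with synthetic cameras rather than the authors' code:

```python
import numpy as np

def triangulate(P1, P2, a1, a2):
    """Linear triangulation (Eq. (5)): stack two rows per camera from
    a x (P A) = 0 and take the SVD null vector as homogeneous A."""
    x1, y1 = a1
    x2, y2 = a2
    M = np.array([x1 * P1[2] - P1[0],
                  y1 * P1[2] - P1[1],
                  x2 * P2[2] - P2[0],
                  y2 * P2[2] - P2[1]])
    _, _, Vt = np.linalg.svd(M)
    A = Vt[-1]
    return A[:3] / A[3]          # dehomogenize to (X, Y, Z)

def project(P, A):
    a = P @ np.append(A, 1.0)
    return a[:2] / a[2]

# Two synthetic cameras separated along X, both looking down +Z.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
A_true = np.array([0.3, -0.2, 4.0])
A_est = triangulate(P1, P2, project(P1, A_true), project(P2, A_true))
```

With exact correspondences the null vector of M recovers A exactly; with noisy image points the SVD gives the algebraic least-squares solution.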

Now consider a ray that is back-projected from point a1 into the 3D scene (A″–A′–A in Fig. 2). Given a point a1 in the image plane, we want to find the set of points that constructs a ray passing through the camera center C1. To construct a ray in space, we need two points. One is the camera center C1, and the other can be obtained from Eq. (1) as
A = P1+ a1,
(6)
where P+ is the pseudoinverse of P. Since P P+ = I, the point P1+ a1 lies on the ray. This ray is imaged by camera 2 through camera center C2 and constructs the line l2, which can be written as
l2 = (P2 C1) × (P2 P1+ a1).
(7)
Since the projection of C1 into camera 2 is the epipole e2, Eq. (7) becomes
l2 = (e2) × (P2 P1+ a1) = F a1.
(8)
The matrix F is called the fundamental matrix in the machine vision community. Since the point a2 lies on the line l2, we can write
a2T l2 = 0.
(9)
From Eqs. (8) and (9),
a2T F a1 = 0.
(10)
For any point correspondence a1 and a2, the fundamental matrix satisfies the above condition, which is called the epipolar constraint.
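The construction of Eqs. (7)–(10) can be sketched numerically: build F from the two projection matrices via the epipole and the pseudoinverse, then check that a correspondence satisfies the epipolar constraint. This is a NumPy illustration with synthetic cameras, not the authors' implementation:

```python
import numpy as np

def fundamental_from_projections(P1, P2, C1):
    """Eq. (8): F = [e2]_x (P2 P1^+), where C1 is the homogeneous
    camera-1 center (P1 C1 = 0) and e2 = P2 C1 is its epipole."""
    e2 = P2 @ C1
    e2_cross = np.array([[0.0, -e2[2], e2[1]],      # skew-symmetric
                         [e2[2], 0.0, -e2[0]],      # matrix so that
                         [-e2[1], e2[0], 0.0]])     # e2_cross @ v = e2 x v
    return e2_cross @ (P2 @ np.linalg.pinv(P1))

# Synthetic stereo pair: camera 1 at the origin, camera 2 shifted in X.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
C1 = np.array([0.0, 0.0, 0.0, 1.0])                 # center of camera 1
F = fundamental_from_projections(P1, P2, C1)

# Any correspondence (a1, a2) of a scene point obeys a2^T F a1 = 0.
A = np.array([0.5, 0.1, 3.0, 1.0])
a1 = P1 @ A; a1 /= a1[2]
a2 = P2 @ A; a2 /= a2[2]
residual = a2 @ F @ a1
```

The residual is zero up to floating-point error, which is exactly the epipolar constraint of Eq. (10); F a1 gives the epipolar line on which the corresponding point must be searched.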

3. Simulation and Experiment Results

A. Hardware Setup

Figure 3 illustrates the system setup. We use two CMOS cameras (4096×3072, 25 fps) with a pixel size of 6 μm × 6 μm. Three diffuse illumination sources are used in this setup. An on-axis light source is located above the IC package and is used for masking during image processing; the centroid of the reflected light is used to locate the x and y coordinates of each bump. Two light sources, Light 1 and Light 2, are used to obtain good image contrast. The angle and height of these two light sources must be adjusted to obtain optimal image contrast.

Fig. 3. System setup.

B. System Calibration

Fig. 4. (a) Image with radial distortion. Red dots show centroids of crosses; green circles are locations of ideal grids. (b) Image after radial distortion is corrected. Red dots show centroids of crosses; green circles are locations of ideal grids.

C. Measurements

Figure 5 shows the two camera centers and the world coordinate system. From the calculated P matrices, the camera center coordinates (x, y, z) can be determined. The calculated values are (x1, y1, z1) = (0.63, 64.9, 191.2) and (x2, y2, z2) = (0.12, 68.3, 188.3) in millimeters. At this camera location, the image field of view is 38 mm × 28.5 mm.

Fig. 5. Camera center and the world coordinates.

Fig. 6. Measurement procedures.

In order to reconstruct the Z coordinates, point correspondences must be identified. The first step is to determine the corresponding bump pairs between the two images. For this purpose, the on-axis light is used. Figure 7 shows the BGA side of the IC package sample (top), an image captured with Light 1 (bottom left), and an image captured with the on-axis light (bottom right).

Fig. 7. BGA side of IC package sample (top), images captured with Light 1 (bottom left), and on-axis light (bottom right).

The image captured with Light 1 shows a brighter background reflection from the substrate surface than the image captured with the on-axis light. If the background reflection has intensity values similar to those of the bumps, the balls cannot be isolated properly. This is why the on-axis image is needed for bump masking. Figure 8 shows the masked image. The BGA image taken with the on-axis light is used to make the mask, which is then applied to the images captured with Lights 1 and 2.

Fig. 8. Masked image. Image is captured with Light 1, while the mask is based on the on-axis light image.
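The masking step can be sketched with a simple intensity threshold in NumPy. The frames and the threshold value below are entirely illustrative; the paper does not specify the segmentation details:

```python
import numpy as np

def bump_mask(onaxis_img, threshold):
    """Binary bump mask from the on-axis image, where bumps appear as
    bright specular spots on a dark substrate."""
    return onaxis_img > threshold

# Toy 8x8 frames: two bright 2x2 "bumps" on a dark substrate (on-axis)
# and a uniformly bright background (Light 1).
onaxis = np.full((8, 8), 20, dtype=np.uint8)
onaxis[1:3, 1:3] = 200
onaxis[5:7, 4:6] = 210
light1 = np.full((8, 8), 90, dtype=np.uint8)

mask = bump_mask(onaxis, 100)            # built from the on-axis image
masked = np.where(mask, light1, 0)       # applied to the Light-1 image
```

Because the on-axis image suppresses the substrate reflection, the threshold cleanly separates bumps from background even when the Light-1 background is bright.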

Figure 9 is the masked image with bump numbers. The top image is from Camera 1 and the bottom is from Camera 2. Because the cameras view the object from different angles, the labels in the two images do not match, and a reordering step is therefore necessary to establish the same labeling in both images.

Fig. 9. Masked image with bump numbers. Camera 1 (top) and Camera 2 (bottom).

D. Substrate Warpage Measurement

Once the corresponding bump pairs between the two images are determined, the substrate warpage measurement can be performed. Since there are no specific features or texture on the substrate that can be used for locating point correspondences, the ball edges are used to obtain these pairs. First, a fundamental matrix F is calculated using point correspondences obtained from the edge of each bump, as illustrated in Fig. 10. The y coordinate of each edge is the position where the ball has its maximum diameter; the x coordinate is found from the intensity profile of this y cross section by applying an intensity threshold.

Fig. 10. Bump image with the edge locations shown in the red dots.

Once the fundamental matrix F is obtained, point correspondences on the substrate can be calculated. Figure 11 illustrates how to determine these pairs.

Fig. 11. Images of the Camera 1: the red dot is the point defined from the ball edge (top); Camera 2: the red line shows the epipolar line calculated from F matrix, and the green dot indicates the corresponding point (bottom).

First, a reference point, shown as the red dot, is chosen in the Camera 1 image (top). The coordinates of this reference point are defined from the edge locations determined previously; as a result, the y coordinates of the red dot and the green arrow (maximum diameter) are identical. In this case, the x coordinate of the red dot is defined as 8 pixels away from the edge of the ball. Once the point in Camera 1 is defined, we can calculate an epipolar line by using Eq. (8); the corresponding point must lie somewhere along this line. To identify it, the y coordinate is again chosen from the maximum-ball-diameter position in the Camera 2 image, and the green dot is the corresponding point. Since reference points can be defined on each side of the ball, we have two reference points per ball.

Once point correspondences are defined, we can calculate the Z coordinates. It should first be noted that the disparity of the substrate changes slowly almost everywhere; in other words, the substrate surface should be smooth. Thus, to calculate the Z coordinate of a single point, we average the four nearest points around it. We define the substrate warpage as follows:
Warpage = (1/5)[(H1 + H2 + ⋯ + H5) − (L1 + L2 + ⋯ + L5)].
(11)
Hn denotes the five largest Z values and Ln the five smallest Z values from the measurements. Figure 12 illustrates the 3D profile of the IC package and clearly shows the warped shape. The dimensions of this sample are 38 mm × 28.5 mm. Each dot indicates the Z coordinate of a sampled point, and the colored plane shows the regression plane based on these Z coordinates.

Fig. 12. 3D warpage scatter plot with regression plane.
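Eq. (11) is a direct computation once the substrate Z coordinates are reconstructed; a minimal NumPy sketch (the sample heights are made up):

```python
import numpy as np

def warpage(z_values):
    """Eq. (11): mean of the five largest Z values minus the mean of
    the five smallest, over the reconstructed substrate points."""
    z = np.sort(np.asarray(z_values, dtype=float))
    return z[-5:].mean() - z[:5].mean()

# Toy height set: ten substrate Z samples, 0..9.
w = warpage(np.arange(10.0))
```

Averaging over five extreme values rather than taking a single max/min pair makes the metric less sensitive to an isolated reconstruction outlier.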

To evaluate our results, we use a confocal microscope as a reference. The measurements are repeated 35 times consecutively. The mean substrate warpage (1σ) is 226.8 ± 4.2 μm with our system and 215.2 μm with the reference confocal microscope; the measurement bias is about 11 μm for this IC package. Another metric for evaluating the system performance is the linearity between our system and the reference. One parameter for measuring linearity is the correlation coefficient, defined as
ρ = cov(X, Y)/(σX σY),
(12)
where cov is the covariance and σ is the standard deviation. X is the set of data from our system and Y that of the reference tool. Figure 13 is the point-to-point correlation plot between the two systems. The blue centerline shows the regression line, with a correlation coefficient (1σ) of 0.968 ± 0.002. The two black lines illustrate the 2σ upper and lower limits of 25.15 ± 0.20 μm.

Fig. 13. Correlation between our system and the reference confocal tool.
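Eq. (12) can be written out directly in NumPy; an illustrative sketch with synthetic data, not the paper's measurement set:

```python
import numpy as np

def correlation(x, y):
    """Eq. (12): rho = cov(X, Y) / (sigma_X * sigma_Y)."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    cov = ((x - x.mean()) * (y - y.mean())).mean()
    return cov / (x.std() * y.std())

# Perfectly linear data gives rho = 1 (or -1 for a negative slope);
# measurement noise pulls |rho| below 1.
x = np.arange(20.0)
rho_linear = correlation(x, 3.0 * x + 2.0)
```

Note that rho measures linearity only: a constant bias between the two tools (such as the ~11 μm offset reported above) leaves rho unchanged, which is why the 2σ difference is reported separately.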

E. BGA Coplanarity Measurement

To determine the BGA coplanarity, the bump heights must be calculated. We use a 3D bump model, a hemi-ellipsoid as shown in Fig. 14, to estimate the bump heights. A single ball area is defined as 100 pixels × 100 pixels, the same as the real image size captured by the cameras. From the P matrices obtained in the experiment and the X, Y, Z coordinates of the model, we can calculate the expected 2D captured images shown in Fig. 15.

Fig. 14. Simulated bump model.
Fig. 15. Simulated image from the P matrices obtained from the experiment.

The two green circles indicate the edges of the bump, determined by the same method used in the warpage measurement, and the straight line defines the diameter in pixels. From this model, the relationship between ball height and diameter in pixels can be obtained, as shown in Fig. 16.

Fig. 16. Relationship between the ball height and the ball diameter in pixels.

From the two edge locations determined for the warpage measurement, we obtain the diameter of each ball and convert it to a ball height using this relationship. To reconstruct the BGA coplanarity distribution, the calculated ball heights are added to the Z coordinates of the substrate warpage. The results are shown in Figs. 17 and 18.

Fig. 17. 3D coplanarity scatter plot.
Fig. 18. Correlation between our system and the reference confocal tool.
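The diameter-to-height conversion is a lookup on the monotonic Fig. 16-style curve; a sketch using linear interpolation. The calibration table values below are hypothetical — the real mapping is generated from the hemi-ellipsoid model and the experimental P matrices:

```python
import numpy as np

# Hypothetical calibration table sampled from a Fig. 16-style curve:
# measured pixel diameter (monotonic) vs. ball height in micrometers.
diam_px   = np.array([80.0, 85.0, 90.0, 95.0, 100.0])
height_um = np.array([420.0, 445.0, 470.0, 495.0, 520.0])

def ball_height(measured_diam_px):
    """Convert a measured pixel diameter to a ball height by linear
    interpolation on the calibration curve."""
    return np.interp(measured_diam_px, diam_px, height_um)

h = ball_height(92.5)
```

Because the model drives the lookup, any ball whose real shape deviates from the hemi-ellipsoid (e.g., from process issues) lands off the curve, which is the outlier mechanism discussed below.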

The mean BGA coplanarity (1σ) is 259.7 ± 3.7 μm with our system and 222.8 μm with the reference confocal microscope. The correlation coefficient (1σ) is 0.957 ± 0.001, and the 2σ difference is 28.79 ± 0.14 μm. Since the BGA ball heights are estimated from the model, the system produces measurement outliers when the shape of a ball deviates from the model because of process issues. Thus both the correlation coefficient and the 2σ difference for BGA coplanarity are worse than the corresponding parameters for the warpage measurement. Nevertheless, the proposed method gives approximately the same standard deviation as the substrate warpage measurement. To obtain the two measurement results (Figs. 12 and 17) from a series of raw images, the execution time (1σ) on a laptop (Intel Core i7, 2.4 GHz, 8 GB of memory) in a MATLAB environment is 69.2 ± 1.0 s. We have validated that the proposed method works for IC package samples with concave warpage by measuring 30 different samples.

To evaluate the effect of BGA surface reflectivity on height reconstruction, two different illumination conditions are compared. Figure 19 shows the same bump under nominal intensity (left) and brighter illumination (right), the latter creating intensity saturation at the top of the BGA ball. The mean pixel-diameter difference ((R1−L1)−(R2−L2)) with 1σ between the two illumination conditions is 0.14 ± 0.69 pixels among 35 randomly chosen bumps. From Fig. 16, a 0.8 pixel diameter difference corresponds to a 4 μm height difference.

Fig. 19. Bump image with different illumination conditions.

4. Conclusion

We have demonstrated a method for substrate warpage and BGA coplanarity inspection using a stereo vision system. The system allows fast full-field measurements, making it suitable for inline back-end inspection and process monitoring. To evaluate its performance, a particular IC sample was measured 35 times and compared with a reference confocal microscope. The mean substrate warpage (1σ) is 226.8 ± 4.2 μm with our system and 215.2 μm with the reference confocal microscope; the measurement bias is about 11 μm for this IC package. The correlation coefficient is 0.968 ± 0.002, and the 2σ difference between the two methods is 25.15 ± 0.20 μm for the warpage measurement. The mean BGA coplanarity is 259.7 ± 3.7 μm with our system and 222.8 μm with the reference confocal microscope. The bump-level correlation coefficient for BGA coplanarity is 0.957 ± 0.001, and the 2σ difference is 28.79 ± 0.14 μm. Data acquisition takes about 0.2 s for a full-field measurement.

The authors gratefully acknowledge the support of Intel Corporation.

References

1. W. D. Brown, Electronic Packaging (IEEE, 2006).
2. W. J. Greig, Integrated Circuit Packaging, Assembly and Interconnections (Springer, 2007).
3. Texas Instruments, "Flip chip ball grid array package reference guide" (2005), http://www.ti.com/lit/ug/spru811a/spru811a.pdf.
4. H. Tsukahara, Y. Nishiyama, F. Takahashi, and T. Fuse, "High-speed solder bump inspection system using a laser scanner and CCD camera," Systems and Computers in Japan 31, 94–102 (2000).
5. P. Kim and S. Rhee, "Three-dimensional inspection of ball grid array using laser vision system," IEEE Trans. Electron. Packag. Manufact. 22, 151–155 (1999).
6. H. N. Yen and D. M. Tsai, "A fast full-field 3D measurement system for BGA coplanarity inspection," Int. J. Adv. Manuf. Technol. 24, 132–139 (2004).
7. V. Bartulovic, M. Lucic, and G. Zacek, "Inspection of ball grid arrays (BGA) by using shadow images of the solder balls," U.S. Patent 6,177,682 B1 (23 January 2001).
8. D. Marr and T. Poggio, "Cooperative computation of stereo disparity," Science 194, 283–287 (1976).
9. U. R. Dhond and J. K. Aggarwal, "Structure from stereo—a review," IEEE Trans. Syst. Man Cybern. 19, 1489–1510 (1989).
10. M. Z. Brown, D. Burschka, and G. D. Hager, "Advances in computational stereo," IEEE Trans. Pattern Anal. Mach. Intell. 25, 993–1008 (2003).
11. R. Y. Tsai, "A versatile camera calibration technique for high-accuracy 3D machine vision metrology using off-the-shelf TV cameras and lenses," IEEE J. Robot. Autom. 3, 323–344 (1987).
12. Z. Zhang, "Flexible camera calibration by viewing a plane from unknown orientations," in Proc. 7th Int. Conference on Computer Vision (IEEE, 1999), pp. 666–673.
13. Z. Zhang, "A flexible new technique for camera calibration," IEEE Trans. Pattern Anal. Mach. Intell. 22, 1330–1334 (2000).
14. P. Luo, Y. Chao, and M. Sutton, "Application of stereo vision to three-dimensional deformation analyses in fracture experiments," Opt. Eng. 33, 981–990 (1994).
15. J. J. Aguilar, F. Torres, and M. A. Lope, "Stereo vision for 3D measurement: accuracy analysis, calibration and industrial applications," Measurement 18, 193–200 (1996).
16. C. J. Tay, X. Kang, C. Quan, X. Y. He, and H. M. Shang, "Height measurement of microchip connecting pins by use of stereovision," Appl. Opt. 42, 3827–3831 (2003).
17. Y. J. Xiao and Y. F. Li, "Optimized stereo reconstruction of free-form space curves based on a nonuniform rational B-spline model," J. Opt. Soc. Am. A 22, 1746–1762 (2005).
18. Z. Ren and L. Cai, "Three-dimensional structure measurement of diamond crowns based on stereo vision," Appl. Opt. 48, 5917–5932 (2009).
19. Z. Ren, J. Liao, and L. Cai, "Three-dimensional measurement of small mechanical parts under a complicated background based on stereo vision," Appl. Opt. 49, 1789–1801 (2010).
20. Z.-Z. Tang, J. Liang, Z. Xial, C. Guo, and G. Hu, "Three-dimensional digital image correlation system for deformation measurement in experimental mechanics," Opt. Eng. 49, 103601 (2010).
21. C. J. Tay, X. He, X. Kang, C. Quan, and H. M. Shang, "Coplanarity study on ball grid array packaging," Opt. Eng. 40, 1608–1612 (2001).
22. M. Dong, R. Chung, E. Y. Lam, and K. S. M. Fung, "Height inspection of wafer bumps without explicit 3-D reconstruction," IEEE Trans. Electron. Packag. Manufact. 33, 112–121 (2010).
23. C. Steger, Handbook of Machine Vision (Wiley-VCH, 2006).
24. J. Heikkila and O. Silven, "A four-step camera calibration procedure with implicit image correction," in Proc. Computer Vision and Pattern Recognition (IEEE, 1997), pp. 1106–1112.
25. K. F. Riley, M. P. Hobson, and S. J. Bence, "Matrices and vector spaces," in Mathematical Methods for Physics and Engineering (Cambridge University, 2002).
26. R. Hartley, "In defense of the eight-point algorithm," IEEE Trans. Pattern Anal. Mach. Intell. 19, 580–593 (1997).
27. R. Hartley, "Triangulation," Comput. Vis. Image Underst. 68, 146–157 (1997).

OCIS Codes
(110.0110) Imaging systems : Imaging systems
(150.0150) Machine vision : Machine vision
(150.3040) Machine vision : Industrial inspection
(150.6910) Machine vision : Three-dimensional sensing
(150.5495) Machine vision : Process monitoring and control

ToC Category:
Machine Vision

History
Original Manuscript: January 30, 2014
Revised Manuscript: April 5, 2014
Manuscript Accepted: April 8, 2014
Published: May 9, 2014

Virtual Issues
Vol. 9, Iss. 7 Virtual Journal for Biomedical Optics

