
Optics Express


  • Editor: C. Martijn de Sterke
  • Vol. 20, Iss. 22 — Oct. 22, 2012
  • pp: 24382–24393

Robust laser speckle recognition system for
authenticity identification

Chia-Hung Yeh, Po-Yi Sung, Chih-Hung Kuo, and Ruey-Nan Yeh  »View Author Affiliations


Optics Express, Vol. 20, Issue 22, pp. 24382-24393 (2012)
http://dx.doi.org/10.1364/OE.20.024382








Abstract

This paper proposes a laser speckle recognition system for authenticity verification. Because of the unique surface imperfections of objects, laser speckle provides identifiable features for authentication. A Gabor filter, SIFT (Scale-Invariant Feature Transform), and projection were used to extract features from laser speckle images. To accelerate the matching process, the extracted Gabor features were organized into an indexing structure using the K-means algorithm. Plastic cards were used as the target objects, and the hardware of the speckle capturing system was built. The experimental results show that the retrieval performance of the proposed method is accurate with a database of 516 laser speckle images. The proposed system is robust and feasible for authenticity verification.

© 2012 OSA

1. Introduction

Because of an increasing emphasis on personal privacy, security, and convenience, authenticity verification systems are a crucial issue in academic and industrial fields. To ensure that only legitimate users have access, service providers require secure and reliable schemes to effectively identify the individual requesting their services. Possible applications include secure access to buildings, entrance guards, computer systems, and ATMs. Although traditional magnetic and IC cards can achieve this purpose, they can be easily duplicated using current technology [1–4]. Therefore, developing objects that contain a unique physical identity code for secure access is vital.

Optical technology is used in data security and identity verification [5, 6] to solve the duplication problem of traditional magnetic and IC cards. It has the advantage of no RF interference from nearby objects and is more secure because of the highly directional property of optical communication. Data encryption technology is also used to provide an additional layer of security. However, optical identification suffers from environmental factors, such as fog, rain, and the lack of a line of sight. Biometric recognition [7], also called biometrics, has received considerable attention for security access applications in the past decade. Biometrics uses body characteristics, such as fingerprints, faces, irises, and voices. Fingerprint recognition [8–10] is one of the oldest methods to recognize or verify people, because each person has a unique fingerprint structure that consists of ridges and valleys. Face recognition [11–13] techniques identify faces by extracting facial features, such as the relative position and size of the eyes, lips, nose, jaw, and cheekbones. The iris region contains a visual texture that carries distinctive information useful for personal identification [14–16]. Biometric techniques use the body of a person as the key; therefore, it is unnecessary to carry a passport, smart card, ID card, or key. Each body part has a unique identity that enables personal identification. However, biometrics requires a large amount of data to be stored for each person, and such systems are not always reliable because humans change over time. In addition, biometric devices are expensive and their design is complex.

The speckle effect is observed when coherent light is scattered from a rough surface. If the surface is sufficiently rough to create path-length differences, the intensity of the resultant light varies randomly. Using microscopic examination, Buchanan et al. [17] found that almost all item surfaces are not smooth, as shown in Fig. 1.

Fig. 1 Microscopic patterns on the surface of paper [17].

Therefore, they used a scanner with a low-power focused laser beam to scan the surface of the item to be identified. Microscopic irregularities (Fig. 1) on the surface result in complex scattering, and the optical phenomenon of “speckle” forms a unique physical identity that can be regarded as an identification feature. The advantages of laser-based identification systems are as follows: first, the system is noncontact, which prevents the card surface from rubbing against a steel plate and thus maintains identification accuracy. Second, the device is low cost and can be miniaturized, making it an affordable and practical solution. Third, its uniqueness can be used to prevent counterfeiting and meet the increasing security requirements for information.

Buchanan et al. used cross-correlation to determine whether two scans were from the same sheet of paper. A strong peak indicates that the two scans are identical; the absence of a strong peak indicates that they are independent of each other. However, cross-correlation does not work effectively if a slight displacement occurs during acquisition. Instead of a one-dimensional signal, the proposed system acquires two-dimensional laser speckle signals of plastic cards for authentication. Figure 2 shows the plastic cards used in our experiments and the speckles captured by the proposed device.

Fig. 2 Plastic cards and their laser speckles.
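Buchanan et al.'s one-dimensional test can be sketched as a normalized cross-correlation peak check. The helper below is an illustrative NumPy sketch (not the authors' implementation), using synthetic scans: a peak near 1 flags two scans of the same surface, while independent scans give only a weak peak.

```python
import numpy as np

def normalized_xcorr_peak(a, b):
    """Peak of the normalized cross-correlation of two 1-D scans.

    A value near 1 suggests the scans come from the same surface;
    independent scans yield a much weaker peak (illustrative sketch)."""
    a = (a - a.mean()) / (a.std() * len(a))
    b = (b - b.mean()) / b.std()
    return np.correlate(a, b, mode="full").max()

rng = np.random.default_rng(0)
scan = rng.standard_normal(256)
same = scan + 0.05 * rng.standard_normal(256)   # re-scan of the same surface
other = rng.standard_normal(256)                # scan of a different surface

print(normalized_xcorr_peak(scan, same) > normalized_xcorr_peak(scan, other))  # True
```

As the paper notes, this test degrades quickly once the scan is displaced, which motivates the shift-robust features used here.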

This study developed an optical speckle capturing device and the corresponding recognition method. Using the developed prototype, laser speckle patterns were captured from plastic cards. Laser speckle is sensitive to the displacement of the plastic cards, even a slight shift or rotation. Therefore, instead of cross-correlation alone, more robust features are required to identify laser speckles. We extracted the features of laser speckles using a Gabor filter, SIFT (Scale-Invariant Feature Transform), and projection. The Gabor filter, which captures global details of the laser speckle images, was used to organize the database into an indexing structure with the K-means algorithm, accelerating the matching process and making the proposed system feasible for real-time applications. SIFT provides robust object identification under shift, scale, rotation, and geometric distortion, and projection features were used to verify cases that SIFT could not easily resolve. These features are expected to be robust against the shift and rotation of the plastic cards during acquisition. The experimental results showed that the speckle pattern of a material captured by our device was invariant after slight displacement, and that the speckle patterns captured from various cards of the same material were distinct. In addition, the proposed system can identify whether a test card already exists in the database. The identification accuracy of the proposed method was 100% when the database contained 170 plastic cards.

The remainder of this paper is organized as follows: Section 2 presents the details of our proposed system and the identification process; Section 3 provides the experimental results to evaluate the performance of the proposed method; and lastly, Section 4 offers a conclusion.

2. Proposed laser speckle identification system

2.1 Speckle capturing device development

The proposed laser speckle capturing device is shown in Fig. 3.

Fig. 3 The prototype of the proposed laser speckle capturing device (a) appearance, (b) interior, and (c) bird's-eye view with a blue plastic card.

The dimensions of the device are 300 × 100 × 200 mm (length × width × height). The capturing device consists of a CCD (Charge-Coupled Device) camera equipped with an aperture and lens to record the laser speckle pattern reflected from the plastic cards, as shown in Fig. 4(a). A laser diode was used to generate highly coherent light, as shown in Fig. 4(b).

Fig. 4 Components of the capturing device (a) digital camera, and (b) laser source.

The wavelength of the laser was 635 nm, and the resolution of the image sensor was 480 × 640 pixels with a pixel size of 8.3 µm × 8.3 µm. Because the magnification ratio of the speckle capturing device was 0.85, the actual area of the plastic card covered by the image sensor was approximately 4687 µm × 6249 µm (4687 µm = 480 × 8.3 µm / 0.85; 6249 µm = 640 × 8.3 µm / 0.85).
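The coverage figures follow directly from the sensor geometry: pixel count times pixel pitch, divided by the magnification. A quick check of the arithmetic, using the values stated above:

```python
# Sensor coverage on the card = pixel count x pixel pitch / magnification
# (values taken from the device specification above).
pixel_pitch_um = 8.3
magnification = 0.85

height_um = 480 * pixel_pitch_um / magnification
width_um = 640 * pixel_pitch_um / magnification

print(round(height_um), round(width_um))  # 4687 6249
```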

The optical configuration of the laser speckle capturing device is shown in Fig. 5.

Fig. 5 Optical configuration of the laser speckle capturing device.

The angle between the capturing device and the card surface and that between the laser diode and the card surface are denoted φ and θ, respectively. The angle φ lies in the range θ + 10° ≤ φ < 90°. In our prototype, we selected φ = 45° and θ = 30°. The highly coherent light emitted from the laser diode illuminates the plastic card, producing light scattered from the card surface. The scattered light subsequently passes through the aperture and lens in front of the image sensor, resulting in a diffractive effect. The diffractive effect produces several bright spots, which interfere with each other. These bright spots generate a distribution of bright and dark spots through constructive and destructive interference, respectively. The distribution of bright and dark spots therefore forms a speckle pattern on the image sensor that is invariant after slight displacement.

2.2 Feature extraction for speckle recognition

Feature extraction is the process of dimensionality reduction in which the input data are transformed into a reduced representation set of features (also called feature vectors). Therefore, the selected features must efficiently represent the original data for recognition purposes. In our proposed recognition system, a Gabor filter, SIFT, and projection were used to extract features from the captured laser speckles. These feature sets extract relevant information from the laser speckles for identification using a reduced representation instead of the full-size data.

2.2.1 Gabor filter features

2.2.2 SIFT features

m(x,y) = √[(L(x+1,y) − L(x−1,y))² + (L(x,y+1) − L(x,y−1))²]   (6)

θ(x,y) = tan⁻¹[(L(x,y+1) − L(x,y−1)) / (L(x+1,y) − L(x−1,y))].   (7)
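Equations (6) and (7) can be computed directly with array slicing. The sketch below assumes a smoothed image L and omits border pixels for brevity; `arctan2` is used as a quadrant-aware tan⁻¹.

```python
import numpy as np

def gradient_features(L):
    """Gradient magnitude m(x, y) and orientation theta(x, y) of a
    smoothed image L, following Eqs. (6)-(7). Border pixels are omitted
    for simplicity (illustrative sketch)."""
    dx = L[2:, 1:-1] - L[:-2, 1:-1]   # L(x+1, y) - L(x-1, y)
    dy = L[1:-1, 2:] - L[1:-1, :-2]   # L(x, y+1) - L(x, y-1)
    m = np.sqrt(dx**2 + dy**2)
    theta = np.arctan2(dy, dx)        # quadrant-aware tan^-1(dy / dx)
    return m, theta

L = np.arange(25, dtype=float).reshape(5, 5)  # toy stand-in for a smoothed image
m, theta = gradient_features(L)
```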

Finally, each key-point is represented by a patch of pixels in its local neighborhood. Figure 7 shows the SIFT key-point descriptor of the laser speckle images.

Fig. 7 Illustration of the computation of the key-point descriptor.

On the left side of Fig. 7, the arrows in an 8×8 set of samples represent the gradient magnitude and orientation computed at each sample in a region around the key-point location. The circular window represents a Gaussian function used to weight the magnitude of each sample point. As shown on the right side of Fig. 7, the samples in each 4×4 subregion are accumulated into an orientation histogram with eight directions, forming a 2×2 descriptor array. The length of each arrow corresponds to the magnitude of the accumulated gradient within the subregion. Figure 8 shows the feature points extracted by SIFT.

Fig. 8 SIFT key-points of the laser speckle images.
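Key-point matching between two speckles is commonly done by nearest-neighbour descriptor search with Lowe's ratio test. The NumPy sketch below uses randomly generated 128-D vectors as hypothetical stand-ins for real SIFT descriptors; it is an illustration of the matching idea, not the authors' exact procedure.

```python
import numpy as np

def count_matches(desc_a, desc_b, ratio=0.8):
    """Count key-point matches by nearest-neighbour descriptor search
    with Lowe's ratio test (a common way to match SIFT descriptors)."""
    matches = 0
    for d in desc_a:
        dists = np.linalg.norm(desc_b - d, axis=1)
        first, second = np.partition(dists, 1)[:2]  # two smallest distances
        if first < ratio * second:
            matches += 1
    return matches

rng = np.random.default_rng(1)
db = rng.standard_normal((50, 128))                      # hypothetical SIFT descriptors
query = db[:20] + 0.01 * rng.standard_normal((20, 128))  # same card, slight noise
print(count_matches(query, db))  # 20: every query descriptor finds its match
```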

2.2.3 Projection feature

The matching of SIFT key-points may encounter a problem in which two differing speckles have a similar number of matching key-points. In such a scenario, SIFT cannot determine the most similar speckle.

The projection features shown in Fig. 9 were used to solve this problem.

Fig. 9 Projection features (a) original speckle image, (b) projection along the x-axis, and (c) projection along the y-axis.

The speckles were projected onto the x and y axes to obtain the features shown in Figs. 9(b) and 9(c), respectively. After calculating the correlation of the concatenated x and y projection features, we selected the speckle with the highest correlation. Finally, the speckle was verified against a threshold; it must pass the predefined threshold to be regarded as a recognition result. Equation (8) shows the 1-D correlation formula, where i indexes the dimensions of the vector, and Ā and B̄ are the means of Aᵢ and Bᵢ, respectively.
r = Σᵢ (Aᵢ − Ā)(Bᵢ − B̄) / √[Σᵢ (Aᵢ − Ā)² Σᵢ (Bᵢ − B̄)²]   (8)
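A minimal sketch of the projection features and the correlation of Eq. (8), using synthetic images as assumed stand-ins for real speckles:

```python
import numpy as np

def projection_feature(speckle):
    """Concatenated x- and y-axis projections of a speckle image."""
    return np.concatenate([speckle.sum(axis=0), speckle.sum(axis=1)])

def correlation(a, b):
    """1-D correlation of Eq. (8) (Pearson correlation coefficient)."""
    a, b = a - a.mean(), b - b.mean()
    return (a * b).sum() / np.sqrt((a**2).sum() * (b**2).sum())

rng = np.random.default_rng(2)
speckle = rng.random((480, 640))
shifted = np.roll(speckle, 1, axis=1)  # same card, slight displacement
different = rng.random((480, 640))     # different card

f = projection_feature(speckle)
print(correlation(f, projection_feature(shifted)) >
      correlation(f, projection_feature(different)))  # True
```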
Figures 10 and 11 show the correlation of the projection features for the same card and for different cards, respectively.

Fig. 10 Projection feature correlation of the same card in the (a) x, and (b) y directions.

Fig. 11 Projection feature correlation of different cards in the (a) x, and (b) y directions.

A strong peak indicates that the two speckles are identical; the absence of a strong peak indicates that they differ.

2.3 Database organization

The proposed system may be applied to attendance signatures and can accommodate a substantial user flow during a certain period of time. Therefore, the system must process a large amount of data in real time. Efficient database organization is crucial for increasing the feasibility of the system. A coarse-to-fine strategy is suitable for matching the target speckle in the data set in real time environments with time constraints. In the first step, the K-means algorithm was used to organize the Gabor filter features into an indexing structure. If the database is clustered into n groups, the matching process is only performed on the group in which the representative feature vector is more similar to that of the target feature vector. Therefore, the matching time can be reduced considerably. Subsequently, the candidate speckles in the selected group are verified by SIFT to identify whether these candidates have no record in the database. Finally, the project features are used to select the feature that is most similar to the target speckle captured from the system.
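The coarse indexing step can be sketched as follows: cluster the Gabor feature vectors with K-means, then restrict matching to the group whose centroid is closest to the query. The NumPy implementation and the Gabor vectors below are illustrative assumptions, not the authors' code.

```python
import numpy as np

def kmeans(features, n_groups, iters=20, seed=0):
    """Minimal K-means (illustrative sketch of the indexing step):
    returns group centroids and a label per database item."""
    rng = np.random.default_rng(seed)
    centroids = features[rng.choice(len(features), n_groups, replace=False)].copy()
    for _ in range(iters):
        dists = np.linalg.norm(features[:, None] - centroids[None], axis=2)
        labels = dists.argmin(axis=1)
        for k in range(n_groups):
            if np.any(labels == k):
                centroids[k] = features[labels == k].mean(axis=0)
    # final assignment against the updated centroids
    labels = np.linalg.norm(features[:, None] - centroids[None], axis=2).argmin(axis=1)
    return centroids, labels

def candidate_group(target, centroids, labels):
    """Coarse step: restrict matching to the group with the closest centroid."""
    g = np.linalg.norm(centroids - target, axis=1).argmin()
    return np.flatnonzero(labels == g)

rng = np.random.default_rng(3)
gabor_db = rng.random((300, 32))        # hypothetical Gabor feature vectors
centroids, labels = kmeans(gabor_db, n_groups=8)
target = gabor_db[42] + 1e-6            # query: a near-identical re-capture
candidates = candidate_group(target, centroids, labels)
print(42 in candidates, len(candidates) < 300)  # True True
```

Only the returned candidates then proceed to the finer SIFT and projection stages, which is where the speed-up comes from.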

2.4 Identification process

The flowchart of the identification process is shown in Fig. 12.

Fig. 12 Flowchart of the proposed identification system.

During the identification process, the CCD camera captures the laser speckle image of the target plastic card to be identified. To accelerate matching, the group with the minimal Euclidean distance between its representative feature vector and that of the target speckle is selected from the database. Subsequently, all speckle images in the selected group are fed to the SIFT feature matching process, which compares their key-points with those of the target speckle (the acquired laser speckle image). In the SIFT matching process, the speckle with the most matching key-points is the final recognition result. When two or more speckles have the same number of matching key-points, the correlation of the projection feature between each of these speckles and the target speckle is used to select the optimal match with the largest correlation value. If the numbers of matching key-points of all speckles in the selected group are lower than a predefined threshold, the target speckle is identified as an unregistered item.
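The decision logic above can be sketched as a small function; the match counts and correlations below are hypothetical placeholders standing in for the outputs of the SIFT and projection stages.

```python
def identify(candidates, n_match, proj_corr, threshold):
    """Decision flow of the identification process (sketch): pick the
    candidate with the most SIFT matches, break ties by projection-feature
    correlation, and reject as unregistered below the threshold."""
    best = max(candidates, key=lambda i: (n_match[i], proj_corr[i]))
    if n_match[best] < threshold:
        return None  # unregistered card
    return best

# hypothetical per-candidate scores
n_match = {0: 3, 1: 40, 2: 40}
proj_corr = {0: 0.1, 1: 0.6, 2: 0.9}
print(identify([0, 1, 2], n_match, proj_corr, threshold=10))  # 2 (tie broken by correlation)
print(identify([0], n_match, proj_corr, threshold=10))        # None (below threshold)
```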

3. Experimental results

A total of 170 plastic cards were used to test the performance of the proposed recognition system. The detailed specification of the laser speckle acquiring device is shown in Table 1.

Table 1. Laser speckle acquiring device specification

When the laser beam illuminates the rough surface of a card, the reflected light is scattered in all directions, the CCD captures the reflected light, and its speckle pattern is observed in the image plane. Each card was captured three times; together with the two additional unregistered cards described below, we obtained a total of 516 laser speckle images. Two speckle images from each registered card (340 speckle images in total) were used to build the database, and the remaining 170 speckle images outside the training set were used to test it. The two additional cards were used to test whether the proposed algorithm can identify them as new items for the system.

In the experiments, all 170 testing speckles matched the correct speckles in the database; therefore, the identification accuracy was 100% when the database contained 170 registered cards. Figure 13 shows the cosine similarity between the target speckle and the speckle in the database recognized by the proposed method.

Fig. 13 Similarity verification between the target speckle and the correct match speckle in the database.

The cosine similarity measures the similarity between two vectors by the cosine of the angle between them, and is defined as

cosine similarity(X, Y) = (X · Y) / (|X| |Y|),   (9)

where X and Y represent the target speckle and each speckle in the database. As shown in Fig. 13, the proposed system robustly identifies the speckle of the plastic card captured by the proposed device.
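Eq. (9) in code, with toy vectors for illustration:

```python
import numpy as np

def cosine_similarity(x, y):
    """Eq. (9): cosine of the angle between two feature vectors."""
    return np.dot(x, y) / (np.linalg.norm(x) * np.linalg.norm(y))

x = np.array([1.0, 2.0, 3.0])
print(cosine_similarity(x, 2 * x))                       # ~1.0: parallel vectors
print(cosine_similarity(x, np.array([3.0, -1.5, 0.0])))  # 0.0: orthogonal vectors
```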

We also tested our system with the unregistered cards. In the SIFT key-point matching process, if the number of matching key-points between the target speckle and all speckles in the database is lower than a predefined threshold, the target speckle is identified as the unregistered card. The two cards, numbered 170 and 171, that were not in the database were used to test our recognition method, and each card was captured three times. The proposed method successfully recognized the unregistered card.

We compared the accuracy and computational complexity of the proposed system with that of Buchanan et al., who used cross-correlation to measure the similarity of two speckles. We used the same database to evaluate the performance of their system. Three speckles were generated for each card; one was used as the target speckle, and the other two were placed in the database for matching. Table 2 shows the identification accuracy with the target card at its original position (no shifting) and after pulling it out by 1 mm and 2 mm from that position.

Table 2. Identification accuracy comparison of the proposed method and Buchanan et al.'s method

The accuracy of the cross-correlation method drops from 96.47% to 81.17% when the test card has a 1 mm displacement, whereas the proposed method still achieves 100% accuracy. The cross-correlation method is far from accurate when the test card has a 2 mm displacement, but the proposed method still achieves 71.17% accuracy. As shown in Table 2, we also tested the robustness of the cross-correlation method and the proposed method to rotation, with each test card rotated by 2 degrees from its original position. The accuracy of the cross-correlation method decreases to 61.76%, whereas the proposed method still achieves 88.82%. These experiments show that the proposed method is more robust to displacement and rotation than the cross-correlation method. Cross-correlation is effective in measuring the similarity of two signals; however, the plastic card moves slightly during the capturing process, so speckles captured at different times are not identical. The proposed method extracts features from the laser speckles that withstand this slight movement. Table 3 shows the average recognition times of the proposed method and that of Buchanan et al. (2D correlation).

Table 3. Average recognition time of the proposed method and Buchanan et al.'s method

In our experiments, the testing platform was a desktop computer with an Intel Core Quad CPU Q6600 @ 2.4 GHz and 2 GB RAM, and the operating system was Microsoft Windows XP. The average time to recognize one speckle using the proposed method was 16.01 s, which is 41% faster than the 27.15 s of the method by Buchanan et al. The proposed system organizes the features into an indexing structure to accelerate the speckle matching process. However, the current processing time is still too long by current standards, because people cannot wait 16 s for authenticity identification to gain entry or obtain resources. We will implement our method on an embedded system and integrate it with the capturing device to make the system more feasible for real-time applications.

4. Conclusions

This paper proposed a laser speckle recognition system for authenticity verification. We built a prototype to capture the laser speckles of plastic cards. Several features, including Gabor filter, SIFT, and projection features, were extracted from the laser speckles to withstand the changes caused by slight displacement of the cards during capture. An indexing structure was built for the database to accelerate the matching process. The experimental results show that the proposed device can capture the laser speckles of plastic cards and that the recognition method has high identification accuracy. The proposed method exhibits performance superior to that of Buchanan et al. (2D correlation) regarding identification accuracy and time. In the near future, we will implement our method on an embedded system to reduce the identification time and test other materials, such as paper, to extend the range of applications of laser speckles for authenticity verification.

Acknowledgments

This work was supported by the Chung-Shan Institute of Science and Technology, Taiwan, under Grants XV98K61P134PE, CSIST-757-V204, and CSIST-757-V305.

References and links

1. S. E. Sarma, S. A. Weis, and D. W. Engels, “RFID systems and security and privacy implications,” in Proceedings of the Workshop on Cryptographic Hardware and Embedded Systems, B. S. Kaliski Jr, Ç. K. Koç and C. Paar, eds. (Springer, 2002), 454–469.
2. A. Juels, “RFID security and privacy: A research survey,” IEEE J. Sel. Areas Comm. 24(2), 381–394 (2006). [CrossRef]
3. T. Phillips, T. Karygiannis, and R. Huhn, “Security standards for the RFID market,” IEEE Security Privacy 3(6), 85–89 (2005). [CrossRef]
4. M. Shridhar, J. W. V. Miller, G. Houle, and L. Bijnagte, “Recognition of license plate images: issues and perspectives,” in Proceedings of the Fifth International Conference on Document Analysis and Recognition (Academic, Bangalore, 1999), 17–20.
5. C. H. Yeh, H. T. Chang, H. C. Chien, and C. J. Kuo, “Design of cascaded phase keys for a hierarchical security system,” Appl. Opt. 41(29), 6128–6134 (2002). [CrossRef] [PubMed]
6. A. K. Ghosh, P. Verma, S. Cheng, and A. Venugopalan, “A free space optics based identification and interrogation system,” in Proceedings of IEEE International Conference on Technologies for Homeland Security (Institute of Electrical and Electronics Engineers, Woburn, 2007), 25–27.
7. A. K. Jain, A. Ross, and S. Prabhakar, “An introduction to biometric recognition,” IEEE Trans. Circ. Syst. Video Tech. 14(1), 4–20 (2004). [CrossRef]
8. A. K. Jain, L. Hong, S. Pankanti, and R. Bolle, “An identity-authentication system using fingerprints,” Proc. IEEE 85(9), 1365–1388 (1997). [CrossRef]
9. J. Leon, G. Sanchez, G. Aguilar, L. Toscano, H. Perez, and J. M. Ramirez, “Fingerprint verification applying invariant moments,” in Proceedings of IEEE International Midwest Symposium on Circuits and Systems (Institute of Electrical and Electronics Engineers, Cancun, 2009), 751–757.
10. R. Zhou, S. W. Sin, D. Li, T. Isshiki, and H. Kunieda, “Adaptive SIFT based algorithm for specific fingerprint verification,” in Proceedings of IEEE International Conference on Hand-based Biometrics (Institute of Electrical and Electronics Engineers, Hong Kong, 2011), 1–6.
11. R. Brunelli and T. Poggio, “Face recognition: features versus templates,” IEEE Trans. Pattern Anal. Mach. Intell. 15(10), 1042–1052 (1993). [CrossRef]
12. M. H. Yang, D. J. Kriegman, and N. Ahuja, “Detecting faces in images: a survey,” IEEE Trans. Pattern Anal. Mach. Intell. 24(1), 34–58 (2002). [CrossRef]
13. C. Liu, “Gabor-based kernel PCA with fractional power polynomial models for face recognition,” IEEE Trans. Pattern Anal. Mach. Intell. 26(5), 572–581 (2004). [CrossRef] [PubMed]
14. R. P. Wildes, “Iris recognition: an emerging biometric technology,” Proc. IEEE 85(9), 1348–1363 (1997). [CrossRef]
15. L. Ma, T. Tan, Y. Wang, and D. Zhang, “Personal identification based on iris texture analysis,” IEEE Trans. Pattern Anal. Mach. Intell. 25(12), 1519–1533 (2003). [CrossRef]
16. J. G. Daugman, “How iris recognition works,” IEEE Trans. Circ. Syst. Video Tech. 14(1), 21–30 (2004). [CrossRef]
17. J. D. R. Buchanan, R. P. Cowburn, A. V. Jausovec, D. Petit, P. Seem, G. Xiong, D. Atkinson, K. Fenton, D. A. Allwood, and M. T. Bryan, “Forgery: ‘fingerprinting’ documents and packaging,” Nature 436(7050), 475 (2005). [CrossRef] [PubMed]
18. J. G. Daugman, “High confidence visual recognition of persons by a test of statistical independence,” IEEE Trans. Pattern Anal. Mach. Intell. 15(11), 1148–1161 (1993). [CrossRef]
19. J. G. Daugman, “Complete discrete 2-D Gabor transforms by neural networks for image analysis and compression,” IEEE Trans. Acoust. Speech Signal Process. 36(7), 1169–1179 (1988). [CrossRef]
20. Z. Zheng, F. Yang, W. Tan, J. Jia, and J. Yang, “Gabor feature-based face recognition using supervised locality preserving projection,” J. Signal Proc. 87(10), 2473–2483 (2007). [CrossRef]
21. W. P. Choi, S. H. Tse, K. W. Wong, and K. M. Lam, “Simplified Gabor wavelets for human face recognition,” J. Pattern Recogn. 41(3), 1186–1199 (2008). [CrossRef]
22. R. J. Nemati and M. Y. Javed, “Fingerprint verification using filter-bank of Gabor and Log Gabor filters,” in Proceedings of the 15th International Conference on Systems, Signals and Image Processing (Institute of Electrical and Electronics Engineers, Bratislava, 2008), 363–366.
23. M. L. Wen, Y. Liang, Q. Pan, and H. C. Zhang, “A Gabor filter based fingerprint enhancement algorithm in wavelet domain,” in Proceedings of IEEE International Symposium on Communications and Information Technology (Institute of Electrical and Electronics Engineers, Beijing, 2005), 1468–1471.
24. D. G. Lowe, “Distinctive image features from scale-invariant keypoints,” J. Comp. Vis. 60(2), 91–110 (2004). [CrossRef]
25. R. Song and J. Szymanski, “Well-distributed SIFT features,” J. Electron Lett. 45(6), 308–310 (2009). [CrossRef]
26. H. Yuning, L. Jing, and L. Chao, “An improved SIFT feature matching algorithm,” in Proceedings of the 8th World Congress on Intelligent Control and Automation (Institute of Electrical and Electronics Engineers, Jinan, 2010), 6109–6113.
27. F. Alonso-Fernandez, P. Tome-Gonzalez, V. Ruiz-Albacete, and J. Ortega-Garcia, “Iris recognition based on SIFT features,” in Proceedings of IEEE International Conference on Biometrics, Identity and Security (Institute of Electrical and Electronics Engineers, Tampa, 2009), 1–8.
28. Y. Wang, C. Huang, and X. Qiu, “Multiple facial instance for face recognition based on SIFT features,” in Proceedings of IEEE International Conference on Mechatronics and Automation (Institute of Electrical and Electronics Engineers, Changchun, 2009), 2442–2446.

OCIS Codes
(100.5010) Image processing : Pattern recognition
(110.6150) Imaging systems : Speckle imaging

ToC Category:
Image Processing

History
Original Manuscript: August 20, 2012
Revised Manuscript: September 25, 2012
Manuscript Accepted: October 2, 2012
Published: October 10, 2012

Citation
Chia-Hung Yeh, Po-Yi Sung, Chih-Hung Kuo, and Ruey-Nan Yeh, "Robust laser speckle recognition system for authenticity identification," Opt. Express 20, 24382-24393 (2012)
http://www.opticsinfobase.org/oe/abstract.cfm?URI=oe-20-22-24382


Sort:  Author  |  Year  |  Journal  |  Reset  

References

  1. S. E. Sarma, S. A. Weis, and D. W. Engels, “RFID systems and security and privacy implications,” in Proceedings of the Workshop on Cryptographic Hardware and Embedded Systems, B. S. Kaliski Jr, Ç. K. Koç and C. Paar, eds. (Springer, 2002), 454–469.
  2. A. Juels, “RFID security and privacy: A research survey,” IEEE J. Sel. Areas Comm.24(2), 381–394 (2006). [CrossRef]
  3. T. Phillips, T. Karygiannis, and R. Huhn, “Security standards for the RFID market,” IEEE Security Privacy.3(6), 85–89 (2005). [CrossRef]
  4. M. Shridhar, J. W. V. Miller, G. Houle, and L. Bijnagte, “Recognition of license plate images: issues and perspectives,” in Proceedings of the Fifth International Conference on Document Analysis and Recognition, (Academic, Bangalore, 1999), 17–20.
  5. C. H. Yeh, H. T. Chang, H. C. Chien, and C. J. Kuo, “Design of cascaded phase keys for a hierarchical security system,” Appl. Opt.41(29), 6128–6134 (2002). [CrossRef] [PubMed]
  6. A. K. Ghosh, P. Verma, S. Cheng, and A. Venugopalan, “A free space optics based identification and interrogation system,” in Proceedings of IEEE International Conference on Technologies for Homeland Security (Institute of Electrical and Electronics Engineers, Woburn, 2007), 25–27.
  7. A. K. Jain, A. Ross, and S. Prabhakar, “An introduction to biometric recognition,” IEEE Trans. Circ. Syst. Video Tech.14(1), 4–20 (2004). [CrossRef]
  8. A. K. Jain, S. Lin Hong, Pankanti, and R. Bolle, “An identity-authentication system using fingerprints,” Proc. IEEE85(9), 1365–1388 (1997). [CrossRef]
  9. J. Leon, G. Sanchez, G. Aguilar, L. Toscano, H. Perez, and J. M. Ramirez, “Fingerprint verification applying invariant moments,” in Proceedings of IEEE International Midwest Symposium on Circuits and Systems (Institute of Electrical and Electronics Engineers, Cancun, 2009), 751–757.
  10. R. Zhou, S. W. Sin, D. Li, T. Isshiki, and H. Kunieda, “Adaptive SIFT based algorithm for specific fingerprint verification,” in Proceedings of IEEE International Conference on Hand-based Biometrics (Institute of Electrical and Electronics Engineers, Hong Kong, 2011), 1–6.
  11. R. Brunelli and T. Poggio, “Face recognition: features versus templates,” IEEE Trans. Pattern Anal. Mach. Intell. 15(10), 1042–1052 (1993). [CrossRef]
  12. M. H. Yang, D. J. Kriegman, and N. Ahuja, “Detecting faces in images: a survey,” IEEE Trans. Pattern Anal. Mach. Intell. 24(1), 34–58 (2002). [CrossRef]
  13. C. Liu, “Gabor-based kernel PCA with fractional power polynomial models for face recognition,” IEEE Trans. Pattern Anal. Mach. Intell. 26(5), 572–581 (2004). [CrossRef] [PubMed]
  14. R. P. Wildes, “Iris recognition: an emerging biometric technology,” Proc. IEEE 85(9), 1348–1363 (1997). [CrossRef]
  15. L. Ma, T. Tan, Y. Wang, and D. Zhang, “Personal identification based on iris texture analysis,” IEEE Trans. Pattern Anal. Mach. Intell. 25(12), 1519–1533 (2003). [CrossRef]
  16. J. G. Daugman, “How iris recognition works,” IEEE Trans. Circ. Syst. Video Tech. 14(1), 21–30 (2004). [CrossRef]
  17. J. D. R. Buchanan, R. P. Cowburn, A. V. Jausovec, D. Petit, P. Seem, G. Xiong, D. Atkinson, K. Fenton, D. A. Allwood, and M. T. Bryan, “Forgery: ‘fingerprinting’ documents and packaging,” Nature 436(7050), 475 (2005). [CrossRef] [PubMed]
  18. J. G. Daugman, “High confidence visual recognition of persons by a test of statistical independence,” IEEE Trans. Pattern Anal. Mach. Intell. 15(11), 1148–1161 (1993). [CrossRef]
  19. J. G. Daugman, “Complete discrete 2-D Gabor transforms by neural networks for image analysis and compression,” IEEE Trans. Acoust. Speech Signal Process. 36(7), 1169–1179 (1988). [CrossRef]
  20. Z. Zheng, F. Yang, W. Tan, J. Jia, and J. Yang, “Gabor feature-based face recognition using supervised locality preserving projection,” Signal Process. 87(10), 2473–2483 (2007). [CrossRef]
  21. W. P. Choi, S. H. Tse, K. W. Wong, and K. M. Lam, “Simplified Gabor wavelets for human face recognition,” Pattern Recogn. 41(3), 1186–1199 (2008). [CrossRef]
  22. R. J. Nemati and M. Y. Javed, “Fingerprint verification using filter-bank of Gabor and Log Gabor filters,” in Proceedings of the 15th International Conference on Systems, Signals and Image Processing (Institute of Electrical and Electronics Engineers, Bratislava, 2008), 363–366.
  23. M. L. Wen, Y. Liang, Q. Pan, and H. C. Zhang, “A Gabor filter based fingerprint enhancement algorithm in wavelet domain,” in Proceedings of IEEE International Symposium on Communications and Information Technology (Institute of Electrical and Electronics Engineers, Beijing, 2005), 1468–1471.
  24. D. G. Lowe, “Distinctive image features from scale-invariant keypoints,” Int. J. Comput. Vis. 60(2), 91–110 (2004). [CrossRef]
  25. R. Song and J. Szymanski, “Well-distributed SIFT features,” Electron. Lett. 45(6), 308–310 (2009). [CrossRef]
  26. H. Yuning, L. Jing, and L. Chao, “An improved SIFT feature matching algorithm,” in Proceedings of the 8th World Congress on Intelligent Control and Automation (Institute of Electrical and Electronics Engineers, Jinan, 2010), 6109–6113.
  27. F. Alonso-Fernandez, P. Tome-Gonzalez, V. Ruiz-Albacete, and J. Ortega-Garcia, “Iris recognition based on SIFT features,” in Proceedings of IEEE International Conference on Biometrics, Identity and Security (Institute of Electrical and Electronics Engineers, Tampa, 2009), 1–8.
  28. Y. Wang, C. Huang, and X. Qiu, “Multiple facial instance for face recognition based on SIFT features,” in Proceedings of IEEE International Conference on Mechatronics and Automation (Institute of Electrical and Electronics Engineers, Changchun, 2009), 2442–2446.