Applied Optics

APPLICATIONS-CENTERED RESEARCH IN OPTICS


Estimation of optical constants of thin film by the use of artificial neural networks

Yuan-sheng Ma, Xu Liu, Pei-fu Gu, and Jin-fa Tang


Applied Optics, Vol. 35, Issue 25, pp. 5035-5039 (1996)
http://dx.doi.org/10.1364/AO.35.005035


Abstract

A system for analyzing single-layer optical thin films has been formulated with artificial neural networks. The training data sets are generated from the computational results of the physical model of thin films and are used to train the artificial neural network; once trained, the network can return film-parameter values in the millisecond time regime. The fast backpropagation algorithm is employed during training, and the training results are also given.
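The pipeline the abstract describes is straightforward to sketch. The Python snippet below is an illustration, not the authors' code: it generates normal-incidence transmittance spectra for a single non-absorbing layer on a glass substrate using the standard characteristic-matrix model, then trains a one-hidden-layer network by backpropagation with a momentum term (one common accelerated variant; the paper's exact fast-backpropagation scheme may differ). Every numerical choice here, including the 21-point spectral sampling, the parameter ranges, the 20-node hidden layer, and the learning rate, is an assumption made for illustration.

    import numpy as np

    rng = np.random.default_rng(0)

    def transmittance(n_film, d_nm, wavelengths_nm, n_sub=1.52, n_inc=1.0):
        # Phase thickness of the layer at normal incidence: delta = 2*pi*n*d/lambda.
        delta = 2.0 * np.pi * n_film * d_nm / wavelengths_nm
        # Characteristic-matrix method for one layer: [B, C] = M @ [1, n_sub].
        B = np.cos(delta) + 1j * np.sin(delta) * n_sub / n_film
        C = 1j * n_film * np.sin(delta) + n_sub * np.cos(delta)
        return 4.0 * n_inc * n_sub / np.abs(n_inc * B + C) ** 2

    wl = np.linspace(400.0, 800.0, 21)        # sampled wavelengths in nm (assumed)
    N = 2000                                  # size of the synthetic training set
    n_true = rng.uniform(1.8, 2.4, N)         # refractive-index range (assumed)
    d_true = rng.uniform(100.0, 400.0, N)     # thickness range in nm (assumed)
    X = np.stack([transmittance(n, d, wl) for n, d in zip(n_true, d_true)])
    # Targets scaled to [0, 1] so a sigmoid output layer can represent them.
    Y = np.stack([(n_true - 1.8) / 0.6, (d_true - 100.0) / 300.0], axis=1)

    H = 20                                    # hidden-layer size (assumed)
    W1 = rng.normal(0.0, 0.5, (wl.size, H)); b1 = np.zeros(H)
    W2 = rng.normal(0.0, 0.5, (H, 2));       b2 = np.zeros(2)
    vW1 = np.zeros_like(W1); vb1 = np.zeros_like(b1)
    vW2 = np.zeros_like(W2); vb2 = np.zeros_like(b2)
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
    lr, mom = 0.2, 0.9                        # learning rate and momentum (assumed)

    for epoch in range(1000):
        h = sigmoid(X @ W1 + b1)              # forward pass, hidden layer
        out = sigmoid(h @ W2 + b2)            # forward pass, output layer
        d_out = (out - Y) * out * (1.0 - out) # backprop through output sigmoid
        d_h = (d_out @ W2.T) * h * (1.0 - h)  # backprop through hidden sigmoid
        for v, g in ((vW2, h.T @ d_out / N), (vb2, d_out.mean(0)),
                     (vW1, X.T @ d_h / N), (vb1, d_h.mean(0))):
            v *= mom
            v -= lr * g                       # momentum update, in place
        W2 += vW2; b2 += vb2; W1 += vW1; b1 += vb1

    # Once trained, recovering (n, d) from a spectrum is a single forward pass.
    spectrum = transmittance(2.1, 250.0, wl)
    n_hat, d_hat = (sigmoid(sigmoid(spectrum @ W1 + b1) @ W2 + b2)
                    * [0.6, 300.0] + [1.8, 100.0])
    print(f"estimated n ~ {n_hat:.2f}, estimated d ~ {d_hat:.0f} nm")

Once the weights are fixed, inverting a new spectrum costs only two matrix products and two sigmoid evaluations, which is where the millisecond-scale estimation the abstract points to comes from; all of the expensive work is pushed into the offline training phase.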

© 1996 Optical Society of America

History
Original Manuscript: November 20, 1995
Revised Manuscript: March 25, 1996
Published: September 1, 1996

Citation
Yuan-sheng Ma, Xu Liu, Pei-fu Gu, and Jin-fa Tang, "Estimation of optical constants of thin film by the use of artificial neural networks," Appl. Opt. 35, 5035-5039 (1996)
http://www.opticsinfobase.org/ao/abstract.cfm?URI=ao-35-25-5035





