We consider the properties of a generalized perceptron learning network, taking into account the decay or gain of the weight vector during the training stages. A mathematical proof is given that establishes the conditional convergence of the learning algorithm. The analytical result indicates that the upper bound on the number of training steps depends on the gain (or decay) factor. A sufficient condition on the exposure time for convergence of a photorefractive perceptron network is derived. We also describe a modified learning algorithm that provides a solution to the problem of weight-vector decay in an optical perceptron caused by hologram erasure. Both analytical and simulation results are presented and discussed.
© 1994 Optical Society of America
Chau-Jern Cheng, Pochi Yeh, and Ken Yuh Hsu, "Generalized perceptron learning rule and its implications for photorefractive neural networks," J. Opt. Soc. Am. B 11, 1619-1624 (1994)
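The abstract describes a perceptron learning rule in which the stored weight vector decays or gains between corrections. The exact update form used by the authors is not reproduced here; the following is a minimal illustrative sketch, assuming a standard perceptron correction combined with a multiplicative per-step factor `gamma` (gamma < 1 models decay, e.g. hologram erasure; gamma > 1 models gain). The function name, parameters, and stopping criterion are hypothetical, not taken from the paper.

```python
import numpy as np

def generalized_perceptron_train(X, y, gamma=0.95, eta=1.0, max_epochs=1000):
    """Sketch of a perceptron rule with a weight decay/gain factor.

    X: (n_samples, n_features) input patterns
    y: (n_samples,) target labels in {-1, +1}
    gamma: multiplicative decay (<1) or gain (>1) applied to the weights
    eta: learning-rate of the perceptron correction
    """
    w = np.zeros(X.shape[1])
    for epoch in range(max_epochs):
        errors = 0
        for xi, yi in zip(X, y):
            w = gamma * w                      # decay (or gain) of the stored weight vector
            if yi * np.dot(w, xi) <= 0:        # misclassified pattern: apply correction
                w = w + eta * yi * xi
                errors += 1
        if errors == 0:                        # all patterns classified correctly
            return w, epoch
    return w, max_epochs
```

With gamma = 1 this reduces to the ordinary perceptron algorithm; for gamma != 1 the number of steps needed for convergence (if it converges at all) changes with the decay/gain factor, which is the kind of dependence the abstract's convergence bound addresses.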