On the learning and convergence of the radial basis networks

Fu-Chuang Chen, Mao Hsing Lin

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review


Abstract

Although radial basis networks have been shown to be able to model any "well behaved" nonlinear function to any desired accuracy, there is no guarantee that the correct network weights can be learned using any existing training rule. This paper reports a convergence result for training radial basis networks based on a modified gradient descent training rule, which is the same as the standard gradient descent algorithm except that a deadzone around the origin of the error coordinates is incorporated into the training rule. The result states that, if the deadzone size is large enough to cover the modeling error and if the learning rate is selected within a certain range, then the norm of the parameter error will converge to a constant, and the output error between the network and the nonlinear function will converge into a small ball. Simulations are used to verify the theoretical results.
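The deadzone-modified rule described in the abstract can be sketched in a few lines. The following Python snippet is illustrative only, not the paper's exact formulation: the Gaussian basis functions with fixed centers, the target function f_true, and the constants eta and d0 are all assumptions made for the example. Output weights receive a standard gradient step only when the output error falls outside the deadzone of radius d0, so training stops adapting once the error is inside the band intended to cover the modeling error.

```python
import numpy as np

# Sketch of deadzone-modified gradient descent for an RBF network
# (illustrative assumptions: fixed Gaussian centers/widths, scalar input,
# only the output weights w are trained).

rng = np.random.default_rng(0)

centers = np.linspace(-1.0, 1.0, 10)           # fixed basis centers (assumed)
sigma = 0.25                                   # fixed basis width (assumed)
w = rng.normal(scale=0.1, size=centers.size)   # trainable output weights

def phi(x):
    """Gaussian radial basis activations for a scalar input x."""
    return np.exp(-((x - centers) ** 2) / (2.0 * sigma ** 2))

def f_true(x):
    """A 'well behaved' target nonlinearity (illustrative choice)."""
    return np.sin(np.pi * x)

eta = 0.05   # learning rate, assumed to lie in the admissible range
d0 = 0.02    # deadzone radius; should dominate the modeling error

for step in range(20000):
    x = rng.uniform(-1.0, 1.0)
    a = phi(x)
    e = w @ a - f_true(x)          # output error of the network
    if abs(e) > d0:                # deadzone: skip updates for small errors
        w -= eta * e * a           # standard gradient step otherwise
```

Under the conditions stated in the abstract (deadzone large enough to cover the modeling error, learning rate in the admissible range), the norm of the parameter error settles to a constant and the output error converges into a small ball around zero.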

Original language: English
Title of host publication: 1993 IEEE International Conference on Neural Networks, ICNN 1993
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 983-988
Number of pages: 6
ISBN (Electronic): 0780309995
DOIs
State: Published - 1 Jan 1993
Event: IEEE International Conference on Neural Networks, ICNN 1993 - San Francisco, United States
Duration: 28 Mar 1993 - 1 Apr 1993

Publication series

Name: IEEE International Conference on Neural Networks - Conference Proceedings
Volume: 1993-January
ISSN (Print): 1098-7576

Conference

Conference: IEEE International Conference on Neural Networks, ICNN 1993
Country: United States
City: San Francisco
Period: 28/03/93 - 01/04/93
