Topology preserving neural networks that achieve a prescribed feature map probability density distribution

Jongeun Choi, Roberto Horowitz

Research output: Contribution to journal › Conference article › peer-review

1 Citation (Scopus)

Abstract

In this paper, a new learning law for one-dimensional topology-preserving neural networks is presented in which the output weights of the network converge to a set that produces a prescribed winning-neuron coordinate probability distribution, even when the probability density function of the input signal is unknown and not necessarily uniform. The learning algorithm also produces an orientation-preserving homeomorphism from the known neural coordinate domain to the unknown input signal space, which maps a prescribed neural coordinate probability density function into the unknown probability density function of the input signal. The convergence properties of the proposed learning algorithm are analyzed using the ODE approach and verified by a simulation study.
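The abstract does not reproduce the paper's specific learning law, so the following is only a minimal illustrative sketch of the setting it describes: a one-dimensional Kohonen-style self-organizing map whose weights form a monotone map from a known neural coordinate grid into the input space, trained on samples from an unknown, non-uniform density. The choice of a Beta(2, 5) input distribution, the Gaussian neighborhood, and the parameter names (`eta`, `sigma`) are assumptions made for illustration; the paper's actual law additionally drives the winner-coordinate distribution to a prescribed target, which this standard update does not do.

```python
# Minimal sketch (not the paper's algorithm): a 1-D Kohonen-style SOM whose
# weights form a monotone, orientation-preserving map from the neural
# coordinate grid into the input space.  The paper's learning law, which makes
# the winner-coordinate distribution match a prescribed density for an unknown
# input density, is not given in the abstract; here we only illustrate the
# standard update and how the winner-coordinate histogram would be measured.
import numpy as np

rng = np.random.default_rng(0)

N = 50                                   # number of neurons on the 1-D grid
coords = np.linspace(0.0, 1.0, N)        # known neural coordinate domain
weights = np.sort(rng.uniform(0, 1, N))  # output weights, initialized monotone

def winner(x, w):
    """Index of the neuron whose output weight is closest to the input sample."""
    return int(np.argmin(np.abs(w - x)))

def som_step(x, w, eta=0.05, sigma=2.0):
    """One Kohonen update: pull the winner and its grid neighbors toward x."""
    c = winner(x, w)
    h = np.exp(-0.5 * ((np.arange(len(w)) - c) / sigma) ** 2)  # neighborhood
    return w + eta * h * (x - w)

# Train on samples from an unknown, non-uniform input density
# (a Beta(2, 5) distribution stands in for the unknown signal here).
for _ in range(20000):
    weights = som_step(rng.beta(2.0, 5.0), weights)

# Empirical winner-coordinate histogram: the quantity the paper's learning
# law would drive toward the prescribed target distribution.
hist = np.zeros(N)
for _ in range(5000):
    hist[winner(rng.beta(2.0, 5.0), weights)] += 1
print("winner-coordinate frequencies:", hist / hist.sum())
```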

Original language: English
Article number: WeC06.4
Pages (from-to): 1343-1350
Number of pages: 8
Journal: Proceedings of the American Control Conference
Volume: 2
Publication status: Published - 2005
Event: 2005 American Control Conference, ACC - Portland, OR, United States
Duration: 2005 Jun 8 - 2005 Jun 10

All Science Journal Classification (ASJC) codes

  • Electrical and Electronic Engineering
