Dimension expansion of neural networks

Eunsuk Jung, Chulhee Lee

Research output: Contribution to conference › Paper › peer-review

4 Citations (Scopus)


In this paper, we investigate the dimension expansion property of three-layer feedforward neural networks and provide helpful insight into how neural networks define complex decision boundaries. First, we note that adding a hidden neuron is equivalent to expanding the dimension of the space defined by the outputs of the hidden neurons. Thus, if the number of hidden neurons is larger than the number of inputs, the input data will be warped into a higher-dimensional space. Second, we show that the weights between the hidden neurons and the output neurons always define linear boundaries in the hidden neuron space. Consequently, the input data are first mapped non-linearly into a higher-dimensional space and then divided by linear planes. The linear decision boundaries in the hidden neuron space are thereby warped into complex decision boundaries in the input space.
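The two observations in the abstract can be illustrated with a minimal NumPy sketch (the weights and sizes below are illustrative assumptions, not taken from the paper): two-dimensional inputs are mapped by three hidden neurons into a three-dimensional hidden neuron space, and the output weights then cut that space with a single linear plane.

```python
import numpy as np

# Illustrative sketch of the dimension expansion idea: a three-layer
# feedforward network whose hidden layer has more neurons (3) than
# inputs (2). All weights are random placeholders for illustration.
rng = np.random.default_rng(0)

def sigmoid(x):
    # Standard logistic activation for the hidden neurons.
    return 1.0 / (1.0 + np.exp(-x))

# 2 inputs -> 3 hidden neurons: each 2-D input point is warped
# non-linearly into a 3-D hidden neuron space (dimension expansion).
W_hidden = rng.standard_normal((2, 3))
b_hidden = rng.standard_normal(3)

# 3 hidden neurons -> 1 output: a linear function of the hidden
# activations, i.e. the plane w.h + b = 0 in hidden neuron space.
w_out = rng.standard_normal(3)
b_out = rng.standard_normal()

x = rng.standard_normal((5, 2))        # five 2-D input points
h = sigmoid(x @ W_hidden + b_hidden)   # points in 3-D hidden space
score = h @ w_out + b_out              # signed side of the plane
label = (score > 0).astype(int)        # linear cut in hidden space

print(h.shape)  # (5, 3): dimension expanded from 2 to 3
print(label)
```

Because `h` depends non-linearly on `x`, the single plane `w_out . h + b_out = 0` in hidden space corresponds to a curved decision boundary back in the 2-D input space, which is the mechanism the paper analyzes.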

Original language: English
Number of pages: 3
Publication status: Published - 2000
Event: 2000 International Geoscience and Remote Sensing Symposium (IGARSS 2000) - Honolulu, HI, USA
Duration: 2000 Jul 24 - 2000 Jul 28



All Science Journal Classification (ASJC) codes

  • Computer Science Applications
  • Earth and Planetary Sciences (all)


