Abstract
In this paper, we investigate the dimension expansion property of three-layer feedforward neural networks and provide insight into how neural networks define complex decision boundaries. First, we note that adding a hidden neuron is equivalent to adding a dimension to the space defined by the outputs of the hidden neurons. Thus, if the number of hidden neurons exceeds the number of inputs, the input data are warped into a higher-dimensional space. Second, we show that the weights between the hidden neurons and the output neurons always define linear boundaries in the hidden-neuron space. Consequently, the input data are first mapped non-linearly into a higher-dimensional space and divided there by linear hyperplanes; these linear decision boundaries in the hidden-neuron space correspond to complex decision boundaries in the input space.
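The following is a minimal sketch, not the authors' code, illustrating the idea described in the abstract: a network with 2 inputs and 5 hidden units maps the input into a 5-dimensional hidden-neuron space, and the output weights define a single hyperplane there, which appears as a curved decision boundary back in input space. All weights and the tanh activation are hypothetical choices for illustration.

```python
# Sketch of dimension expansion in a three-layer feedforward network.
# 2 inputs -> 5 hidden neurons (dimension expansion) -> 1 output.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical, randomly chosen weights for illustration only.
W1 = rng.normal(size=(5, 2))   # input -> hidden: maps 2-D input into 5-D space
b1 = rng.normal(size=5)
w2 = rng.normal(size=5)        # hidden -> output: defines a hyperplane in 5-D
b2 = 0.0

def hidden(x):
    """Non-linear map of a 2-D input into the 5-D hidden-neuron space."""
    return np.tanh(W1 @ x + b1)

def decision(x):
    """Sign of a linear function of the hidden activations:
    a hyperplane in hidden space, a curved boundary in input space."""
    return np.sign(w2 @ hidden(x) + b2)

# Label a small grid of input points: the two classes are separated by a
# single linear boundary in hidden space, yet the resulting regions in the
# original 2-D input space are non-linear.
grid = np.stack(np.meshgrid(np.linspace(-2, 2, 5), np.linspace(-2, 2, 5)), -1)
labels = np.array([[decision(p) for p in row] for row in grid])
print(labels)
```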
Field | Value
---|---
Original language | English
Pages | 678-680
Number of pages | 3
Publication status | Published - 2000
Event | 2000 International Geoscience and Remote Sensing Symposium (IGARSS 2000), Honolulu, HI, USA, 2000 Jul 24 → 2000 Jul 28
All Science Journal Classification (ASJC) codes
- Computer Science Applications
- Earth and Planetary Sciences (all)