Abstract
In this paper, we provide a thorough analysis of the decision boundaries of neural networks when they are used as classifiers. It has been shown that the classifying mechanism of a neural network can be divided into two parts: dimension expansion by the hidden neurons and linear decision boundary formation by the output neurons. In this paradigm, the input data is first warped into a higher dimensional space by the hidden neurons, and the output neurons then draw linear decision boundaries in the expanded space (the hidden neuron space). We also note that the decision boundaries in the hidden neuron space are not completely independent. This dependency of decision boundaries is extended to multiclass problems, providing a valuable insight into the formation of decision boundaries in the hidden neuron space. This analysis provides a new understanding of how neural networks construct complex decision boundaries and explains how different sets of weights may produce similar results.
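The two-part mechanism described above can be illustrated with a minimal sketch (not taken from the paper; the weights below are hand-picked for illustration): a fixed-weight 2-2-1 network for the XOR problem, which is not linearly separable in the original input space but becomes linearly separable after the hidden neurons warp the inputs into hidden neuron space.

```python
import numpy as np

def step(z):
    """Hard-threshold activation, used here for a clean geometric picture."""
    return (z >= 0).astype(float)

# XOR inputs and labels: no single line separates the classes in 2-D input space.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0.0, 1.0, 1.0, 0.0])

# Hidden layer (dimension "expansion" / warping): two neurons computing
# roughly (x1 OR x2) and NOT(x1 AND x2). These weights are illustrative choices.
W_hidden = np.array([[1.0, 1.0],
                     [-1.0, -1.0]])
b_hidden = np.array([-0.5, 1.5])
H = step(X @ W_hidden.T + b_hidden)  # coordinates in hidden neuron space

# Output neuron: a single *linear* decision boundary in hidden neuron space
# (an AND of the two hidden activations) now suffices to classify XOR.
w_out = np.array([1.0, 1.0])
b_out = -1.5
pred = step(H @ w_out + b_out)

print(pred.tolist())  # [0.0, 1.0, 1.0, 0.0] — matches the XOR labels
```

In the hidden neuron space, the four inputs collapse onto the corners (0, 1), (1, 1), and (1, 0), where the positive class is split from the negative class by one hyperplane; this is the sense in which the output neurons only ever draw linear boundaries.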
Original language | English |
---|---|
Pages (from-to) | 167-179 |
Number of pages | 13 |
Journal | Proceedings of SPIE - The International Society for Optical Engineering |
Volume | 4471 |
DOIs | |
Publication status | Published - 2001 |
Event | Algorithms and Systems for Optical Information Processing - San Diego, CA, United States. Duration: 2001 Jul 31 – 2001 Aug 2 |
All Science Journal Classification (ASJC) codes
- Electronic, Optical and Magnetic Materials
- Condensed Matter Physics
- Computer Science Applications
- Applied Mathematics
- Electrical and Electronic Engineering