Geometry of decision boundaries of neural networks

Chulhee Lee, Ohjae Kwon, Eunsuk Jung

Research output: Contribution to journal › Conference article › peer-review

1 Citation (Scopus)


In this paper, we provide a thorough analysis of the decision boundaries of neural networks used as classifiers. It has been shown that the classifying mechanism of a neural network can be divided into two parts: dimension expansion by the hidden neurons and linear decision boundary formation by the output neurons. In this paradigm, the input data are first warped into a higher-dimensional space by the hidden neurons, and the output neurons then draw linear decision boundaries in this expanded space (the hidden neuron space). We also note that the decision boundaries in the hidden neuron space are not completely independent. This dependency of decision boundaries is extended to multiclass problems, providing valuable insight into the formation of decision boundaries in the hidden neuron space. The analysis offers a new understanding of how neural networks construct complex decision boundaries and explains how different sets of weights may produce similar results.
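The two-stage view described in the abstract can be illustrated with a small sketch (not taken from the paper; all weights below are hand-picked for clarity rather than learned). XOR is not linearly separable in the input space, but after the hidden-layer mapping into the hidden neuron space, a single linear boundary drawn by the output neuron suffices:

```python
import numpy as np

# XOR data: not linearly separable in the original 2-D input space.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 1, 1, 0])

def relu(z):
    return np.maximum(z, 0.0)

# Stage 1: hidden neurons warp the input into the "hidden neuron space".
# Two ReLU units with hand-chosen weights (hypothetical, for illustration).
W_hidden = np.array([[1.0, 1.0],
                     [1.0, 1.0]])
b_hidden = np.array([-0.5, -1.5])
H = relu(X @ W_hidden.T + b_hidden)

# Stage 2: the output neuron draws a single LINEAR boundary in that space:
# h1 - 3*h2 - 0.25 = 0 separates the two classes.
w_out = np.array([1.0, -3.0])
b_out = -0.25
predictions = (H @ w_out + b_out > 0).astype(int)

print(predictions)  # → [0 1 1 0], matching the XOR labels
```

The key point mirrors the abstract: all the nonlinearity lives in the hidden-layer warping, while the classification itself is carried out by a hyperplane in the expanded space.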

Original language: English
Pages (from-to): 167-179
Number of pages: 13
Journal: Proceedings of SPIE - The International Society for Optical Engineering
Publication status: Published - 2001
Event: Algorithms and Systems for Optical Information Processing - San Diego, CA, United States
Duration: 2001 Jul 31 to 2001 Aug 2

All Science Journal Classification (ASJC) codes

  • Electronic, Optical and Magnetic Materials
  • Condensed Matter Physics
  • Computer Science Applications
  • Applied Mathematics
  • Electrical and Electronic Engineering

