Abstract
Neural networks have been shown to model complex relations among input attributes in sample data better than induction trees. Such relations can be obtained from a network trained with back-propagation as a set of linear classifiers: each classifier is derived from the linear combination of the input attributes with the weights of a neuron in the first hidden layer. The training data are projected onto the resulting hyperplanes, and the information gain measure is then applied to the projected data. We propose that this approach reduces the computational complexity of extracting rules from neural networks. As a result, concise rules can be extracted that capture relations among continuous-valued input attributes.
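The pipeline the abstract describes can be made concrete with a short sketch. Assuming a network already trained with back-propagation, the code below treats each first-hidden-layer neuron's weights as a hyperplane, projects the training data onto it (i.e., computes the signed value w · x + b), and scores candidate split thresholds on that projection with the information gain measure. The function names (`extract_rules`, `information_gain`) and the toy weights are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def entropy(labels):
    """Shannon entropy of a 1-D array of class labels."""
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def information_gain(projected, labels, threshold):
    """Gain from splitting the projected values at a threshold."""
    left = labels[projected <= threshold]
    right = labels[projected > threshold]
    n = len(labels)
    if len(left) == 0 or len(right) == 0:
        return 0.0
    remainder = (len(left) / n) * entropy(left) + (len(right) / n) * entropy(right)
    return entropy(labels) - remainder

def extract_rules(X, y, W, b):
    """For each first-hidden-layer neuron, treat w . x + b = 0 as a
    hyperplane, project the data onto it, and keep the threshold on
    the projection with the highest information gain."""
    rules = []
    for w_i, b_i in zip(W, b):
        proj = X @ w_i + b_i                     # projection onto the hyperplane normal
        candidates = np.unique(proj)             # candidate split points
        best = max(candidates, key=lambda t: information_gain(proj, y, t))
        rules.append((w_i, b_i, best, information_gain(proj, y, best)))
    return rules

# Toy usage: 2 inputs, 2 hidden neurons with hypothetical trained weights.
X = np.array([[0.1, 0.9], [0.8, 0.2], [0.2, 0.8], [0.9, 0.1]])
y = np.array([1, 0, 1, 0])
W = np.array([[-1.5, 1.2], [0.7, -0.9]])         # first-hidden-layer weight rows
b = np.array([0.1, 0.05])
for w_i, b_i, t, g in extract_rules(X, y, W, b):
    print(f"rule: {w_i} . x + {b_i:.2f} <= {t:.3f}  (gain {g:.3f})")
```

Each extracted rule is a linear inequality over the continuous-valued attributes, which is the concise rule form the abstract refers to.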
Original language | English |
---|---|
Title of host publication | Artificial Neural Networks - ICANN 2001 - International Conference, Proceedings |
Editors | Kurt Hornik, Georg Dorffner, Horst Bischof |
Publisher | Springer Verlag |
Pages | 1193-1198 |
Number of pages | 6 |
ISBN (Print) | 3540424865, 9783540446682 |
Publication status | Published - 2001 |
Event | International Conference on Artificial Neural Networks, ICANN 2001 - Vienna, Austria; Duration: Aug 21, 2001 → Aug 25, 2001 |
Publication series
Name | Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) |
---|---|
Volume | 2130 |
ISSN (Print) | 0302-9743 |
ISSN (Electronic) | 1611-3349 |
Other
Other | International Conference on Artificial Neural Networks, ICANN 2001 |
---|---|
Country/Territory | Austria |
City | Vienna |
Period | Aug 21, 2001 → Aug 25, 2001 |
Bibliographical note
Publisher Copyright: © Springer-Verlag Berlin Heidelberg 2001.
All Science Journal Classification (ASJC) codes
- Theoretical Computer Science
- Computer Science (all)