Abstract
This paper proposes a novel approach to training deep neural networks that unlocks the layer-wise dependency of backpropagation. The approach employs additional modules, called local critic networks, alongside the main network to be trained; these are used to obtain error gradients without complete feedforward and backward propagation passes. We propose a cascaded learning strategy for these local networks. In addition, the approach is useful from a multi-model perspective, enabling structural optimization of neural networks, computationally efficient progressive inference, and ensemble classification for improved performance. Experimental results show the effectiveness of the proposed approach and suggest guidelines for determining appropriate algorithm parameters.
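The core idea in the abstract can be illustrated with a minimal, hypothetical sketch: a lower layer is updated from a small critic's estimated loss instead of waiting for the full backward pass, while the critic itself is pulled toward the main network's actual output (a cascaded strategy). The scalar weights, squared loss, and plain SGD below are illustrative assumptions, not the paper's implementation.

```python
# Toy sketch (assumption: scalar weights, squared loss, plain SGD) of the
# local-critic idea: layer 1 receives its gradient from a critic attached
# to its activation, so it never depends on the full backward pass.

def train_local_critic(x=1.0, t=2.0, lr=0.05, steps=200):
    w1, w2 = 0.5, 0.5   # main network: y = w2 * (w1 * x)
    v = 0.5             # local critic: predicts the output from h1 alone
    for _ in range(steps):
        h1 = w1 * x                  # partial forward pass only
        y_hat = v * h1               # critic's estimate of the output
        # Layer 1 is trained on the critic's loss -- no full backprop.
        g_w1 = 2 * (y_hat - t) * v * x
        # Upper layer finishes the forward pass and uses the true loss.
        y = w2 * h1
        g_w2 = 2 * (y - t) * h1
        # Cascaded critic training: match the critic's estimate to the
        # main network's actual output.
        g_v = 2 * (y_hat - y) * h1
        w1 -= lr * g_w1
        w2 -= lr * g_w2
        v -= lr * g_v
    return (w2 * w1 * x - t) ** 2    # final true loss of the main network

# The true loss shrinks toward zero even though layer 1 only ever saw
# gradients from the critic's estimated loss.
```

Because the critic provides layer 1's gradient from the partial forward pass alone, the two layer updates could in principle run concurrently, which is the layer-wise unlocking the abstract refers to.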
Original language | English |
---|---|
Title of host publication | 2019 International Joint Conference on Neural Networks, IJCNN 2019 |
Publisher | Institute of Electrical and Electronics Engineers Inc. |
ISBN (Electronic) | 9781728119854 |
DOIs | |
Publication status | Published - 2019 Jul |
Event | 2019 International Joint Conference on Neural Networks, IJCNN 2019 - Budapest, Hungary |
Duration | 2019 Jul 14 → 2019 Jul 19 |
Publication series
Name | Proceedings of the International Joint Conference on Neural Networks |
---|---|
Volume | 2019-July |
Conference
Conference | 2019 International Joint Conference on Neural Networks, IJCNN 2019 |
---|---|
Country/Territory | Hungary |
City | Budapest |
Period | 2019 Jul 14 → 2019 Jul 19 |
Bibliographical note
Publisher Copyright: © 2019 IEEE.
All Science Journal Classification (ASJC) codes
- Software
- Artificial Intelligence