Hardware accelerator for training with integer backpropagation and probabilistic weight update

Hyunbin Park, Shiho Kim

Research output: Chapter in Book/Report/Conference proceeding › Chapter

3 Citations (Scopus)

Abstract

Advances in the architecture of inference accelerators and in quantization techniques for neural networks enable effective on-device inference in embedded devices. Privacy concerns over user data, together with the growing demand for user-specific services, have created a need for on-device training. The dot product operations required in backpropagation can be computed efficiently by the multiplier–accumulators (MACs) of an inference accelerator if the forward and backward passes of the neural network use the same precision. This chapter introduces a quantization technique that allows the backward pass to be computed on a digital neuron inference accelerator at the same precision as the forward pass. Updating 5-bit weights with gradients of higher precision is challenging; to address this issue, the chapter also introduces a probabilistic weight update and describes a hardware implementation of the probabilistic weight-update scheme. The proposed training technique achieves 98.15% recognition accuracy on the MNIST dataset.
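The abstract does not spell out the exact update rule, but a common way to realize a probabilistic weight update for low-bit weights is stochastic rounding of the high-precision gradient step. The sketch below is a minimal illustration of that idea, assuming 5-bit signed integer weight codes and a learning rate already expressed in units of one quantization step; the function name and parameters are hypothetical, not taken from the chapter.

```python
import numpy as np

def probabilistic_weight_update(w_int, grad, lr=0.01, n_bits=5):
    """Sketch: stochastic-rounding update of low-bit integer weights.

    w_int : integer weight codes in [-2**(n_bits-1), 2**(n_bits-1) - 1]
    grad  : higher-precision (float) gradient w.r.t. the weights
    lr    : learning rate, assumed to be scaled so that -lr * grad is
            measured in quantization steps (an assumption for this sketch)
    """
    q_min, q_max = -(2 ** (n_bits - 1)), 2 ** (n_bits - 1) - 1

    # Desired real-valued update in units of one quantization step.
    delta = -lr * grad

    # Integer part of the step is always applied; the fractional remainder
    # is applied with probability equal to its value, so the expected
    # update equals delta even though each applied step is an integer.
    step = np.floor(delta)
    frac = delta - step
    step += (np.random.rand(*delta.shape) < frac).astype(delta.dtype)

    # Saturate to the representable 5-bit range.
    return np.clip(w_int + step.astype(w_int.dtype), q_min, q_max)

# Example usage (hypothetical values):
w = np.random.randint(-16, 16, size=(4,), dtype=np.int8)
g = np.random.randn(4).astype(np.float32)
w_new = probabilistic_weight_update(w, g)
```

In hardware, this kind of update can be realized with a pseudo-random number generator and a comparator per weight, which is one plausible reading of the hardware implementation the chapter describes.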

Original language: English
Title of host publication: Hardware Accelerator Systems for Artificial Intelligence and Machine Learning
Editors: Shiho Kim, Ganesh Chandra Deka
Publisher: Academic Press Inc.
Pages: 343-365
Number of pages: 23
ISBN (Print): 9780128231234
Publication status: Published - Jan 2021

Publication series

Name: Advances in Computers
Volume: 122
ISSN (Print): 0065-2458

Bibliographical note

Publisher Copyright:
© 2021 Elsevier Inc.

All Science Journal Classification (ASJC) codes

  • General Computer Science
