CNN Hardware Accelerator Architecture Design for Energy-Efficient AI

Jaekwang Cha, Shiho Kim

Research output: Chapter in Book/Report/Conference proceeding › Chapter

3 Citations (Scopus)

Abstract

Reducing the energy consumption of deep neural network hardware accelerators is critical to democratizing deep learning technology. This chapter introduces design considerations for alleviating the energy consumption of AI accelerators, including the metrics used to evaluate them. These considerations focus on accelerating the convolutional neural network (CNN), the most dominant DNN architecture today. Most of the energy-efficient acceleration methods covered in this chapter fall into two categories: approximation techniques and optimization techniques. The goal is to reduce the number of multiplications or the memory footprint by modifying the multiply-and-accumulate (MAC) operation or the dataflow, making the AI accelerator more energy-efficient and lightweight.
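As an illustration of the approximation techniques the abstract refers to, the sketch below shows low-precision quantization of a MAC operation: weights and activations are mapped to 8-bit integers so the multiply-accumulate runs in cheap integer arithmetic, with a single dequantization at the end. This is a generic example, not code from the chapter; the function names and the uniform symmetric quantization scheme are assumptions for illustration.

```python
import numpy as np

def quantize(x, num_bits=8):
    """Uniform symmetric quantization to signed integers (illustrative scheme)."""
    qmax = 2 ** (num_bits - 1) - 1
    scale = np.max(np.abs(x)) / qmax          # map the largest magnitude to qmax
    q = np.round(x / scale).astype(np.int32)
    return q, scale

def quantized_mac(weights, activations, num_bits=8):
    """Accumulate products in the integer domain; dequantize once at the end."""
    qw, sw = quantize(weights, num_bits)
    qa, sa = quantize(activations, num_bits)
    acc = int(np.sum(qw * qa))                # integer multiply-accumulate
    return acc * sw * sa                      # single dequantization step

rng = np.random.default_rng(0)
w = rng.standard_normal(64).astype(np.float32)
a = rng.standard_normal(64).astype(np.float32)

exact = float(np.dot(w, a))                   # full-precision reference
approx = float(quantized_mac(w, a))           # 8-bit approximate MAC
```

In hardware, the integer accumulation loop replaces FP32 multipliers with far smaller integer units, which is where the energy saving comes from; the dequantization cost is amortized over the whole dot product.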

Original language: English
Title of host publication: Artificial Intelligence and Hardware Accelerators
Publisher: Springer International Publishing
Pages: 319-357
Number of pages: 39
ISBN (Electronic): 9783031221705
ISBN (Print): 9783031221699
DOIs
Publication status: Published - 2023 Jan 1

Bibliographical note

Publisher Copyright:
© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Switzerland AG 2023.

All Science Journal Classification (ASJC) codes

  • General Engineering
  • General Computer Science
