Abstract
Reducing the energy consumption of deep neural network hardware accelerators is critical to democratizing deep learning technology. This chapter introduces design considerations for alleviating the AI accelerator's energy-consumption problem, including metrics for evaluating AI accelerators. These design considerations mainly target the acceleration of convolutional neural networks (CNNs), currently the dominant deep neural network (DNN) architecture. Most of the energy-efficient acceleration methods covered in this chapter fall into two categories: approximation techniques and optimization techniques. The goal is to reduce the number of multiplications or the memory footprint by modifying the multiply-and-accumulate (MAC) operations or the dataflow, making the AI accelerator more energy-efficient and lightweight.
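To make the abstract's notion of "modifying the MAC operation" concrete, the following is a minimal, hypothetical sketch (not taken from the chapter) of one common approximation technique: replacing a full-precision multiply-and-accumulate with an 8-bit quantized MAC. Integer multiplies cost far less energy in hardware than floating-point ones, which is one way accelerators reduce the energy per MAC. The function names, scales, and data are illustrative assumptions.

```python
# Hypothetical sketch of an approximation technique: an 8-bit quantized
# multiply-and-accumulate (MAC). Inputs are quantized to int8, products
# are accumulated in a wide integer accumulator, and the result is
# rescaled to floating point once at the end.

def quantize(values, scale):
    """Map floats to int8 codes by rounding value/scale, clamped to [-127, 127]."""
    return [max(-127, min(127, round(v / scale))) for v in values]

def quantized_mac(x, w, x_scale, w_scale):
    """Integer-only MAC loop over quantized activations and weights."""
    xq = quantize(x, x_scale)
    wq = quantize(w, w_scale)
    acc = sum(a * b for a, b in zip(xq, wq))  # cheap int8 multiplies
    return acc * x_scale * w_scale            # dequantize once, at the end

def float_mac(x, w):
    """Reference full-precision MAC for comparison."""
    return sum(a * b for a, b in zip(x, w))

x = [0.5, -1.2, 0.3, 0.9]   # example activations (assumed data)
w = [0.25, 0.75, -0.5, 0.1] # example weights (assumed data)
print(float_mac(x, w), quantized_mac(x, w, x_scale=0.01, w_scale=0.01))
```

With well-chosen scales the quantized result closely tracks the exact dot product, while every multiply in the inner loop operates on narrow integers, the trade-off the chapter's approximation techniques exploit.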
Original language | English |
---|---|
Title of host publication | Artificial Intelligence and Hardware Accelerators |
Publisher | Springer International Publishing |
Pages | 319-357 |
Number of pages | 39 |
ISBN (Electronic) | 9783031221705 |
ISBN (Print) | 9783031221699 |
DOIs | |
Publication status | Published - 2023 Jan 1 |
Bibliographical note
Publisher Copyright: © The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Switzerland AG 2023.
All Science Journal Classification (ASJC) codes
- General Engineering
- General Computer Science