AI Accelerators for Standalone Computer

Taewoo Kim, Junyong Lee, Hyeonseong Jung, Shiho Kim

Research output: Chapter in Book/Report/Conference proceeding › Chapter

Abstract

Deep learning has become ubiquitous across sensors and edge devices, and in many situations data must be processed consistently in real time. Recently, cloud-based AI systems have reached their limits due to bandwidth constraints. Standalone computing, which moves AI computation to edge devices, is therefore a solution that reduces the traffic required between data generators and processors. However, real-time deep learning inference at the edge places high demands on computing power and memory, and this trend has rapidly raised system power consumption. Many market leaders and researchers have designed integrated systems to reduce power consumption, including GPU-based acceleration APIs and specialized AI accelerators. Intel introduced a new type of coprocessor in 1989; Nvidia subsequently built the CUDA ecosystem; and today many IT companies offer their own devices and APIs. These solutions enable high speed and performance, better security mechanisms, scalability, and better data management.

Original language: English
Title of host publication: Artificial Intelligence and Hardware Accelerators
Publisher: Springer International Publishing
Pages: 53-93
Number of pages: 41
ISBN (Electronic): 9783031221705
ISBN (Print): 9783031221699
DOIs
Publication status: Published - 2023 Jan 1

Bibliographical note

Publisher Copyright:
© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Switzerland AG 2023.

All Science Journal Classification (ASJC) codes

  • General Engineering
  • General Computer Science

