Abstract
Owing to remarkable improvements in deep neural networks (DNNs), various computation-intensive and delay-sensitive DNN services have been developed for smart IoT devices. However, running these services on the devices themselves is challenging due to their limited battery capacity and computational constraints. Although edge computing has been proposed as a solution, edge devices cannot meet the performance requirements of DNN services because most IoT applications demand simultaneous inference services and DNN models continue to grow in size. To address this problem, we propose a framework that enables parallel execution of partitioned and offloaded DNN inference services over multiple distributed edge devices. Notably, edge devices are reluctant to process offloaded tasks because of the energy they consume. Thus, to provide an incentive mechanism for edge devices, we model the interaction between the edge devices and DNN inference service users as a two-level Stackelberg game. Based on this model, the proposed framework determines an optimal scheduling and partitioning strategy that maximizes user satisfaction while incentivizing edge devices to participate. We further derive the Nash equilibrium points at both levels. Simulation results show that the proposed scheme outperforms benchmark methods in terms of user satisfaction and edge-device profit.
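To make the two-level interaction concrete, the sketch below illustrates a generic single-leader, multi-follower Stackelberg pricing game solved by backward induction: the user (leader) announces a per-unit reward, and each edge device (follower) best-responds with the workload it is willing to process. The utility functions, cost coefficients, and grid-search solution are illustrative assumptions for exposition, not the formulation or equilibrium derivation used in the paper.

```python
# Minimal sketch of a single-leader, multi-follower Stackelberg pricing game.
# All parameters and utility forms below are hypothetical, chosen only to
# illustrate the leader/follower structure described in the abstract.

import numpy as np

ENERGY_COST = np.array([0.8, 1.0, 1.3])   # hypothetical per-unit energy cost of each edge device
CAPACITY    = np.array([4.0, 3.0, 5.0])   # hypothetical maximum workload each device accepts
DEMAND      = 6.0                         # total inference workload the user wants offloaded


def follower_best_response(price):
    """Each device picks the workload w maximizing its profit
    (price - cost) * w - 0.5 * w**2 (quadratic congestion penalty),
    giving the unconstrained optimum w* = price - cost, clipped to [0, capacity]."""
    return np.clip(price - ENERGY_COST, 0.0, CAPACITY)


def leader_utility(price):
    """User satisfaction grows with the served workload (capped at the demand)
    and decreases with the total payment made to the devices."""
    work = follower_best_response(price)
    served = min(work.sum(), DEMAND)
    payment = price * work.sum()
    return 10.0 * np.log1p(served) - payment


# The leader solves its level by searching over candidate prices while
# anticipating the followers' best responses (backward induction).
prices = np.linspace(0.0, 5.0, 501)
best_price = max(prices, key=leader_utility)
print("leader price:", round(float(best_price), 2))
print("device workloads:", follower_best_response(best_price).round(2))
```

Under these assumptions, the fixed point of the leader's price and the followers' best responses plays the role of the Stackelberg (Nash) equilibrium derived analytically in the paper.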
| Original language | English |
| --- | --- |
| Pages (from-to) | 1580-1592 |
| Number of pages | 13 |
| Journal | IEEE Transactions on Services Computing |
| Volume | 17 |
| Issue number | 4 |
| DOIs | |
| Publication status | Published - 2024 |
Bibliographical note
Publisher Copyright: © 2008-2012 IEEE.
All Science Journal Classification (ASJC) codes
- Hardware and Architecture
- Computer Science Applications
- Computer Networks and Communications
- Information Systems and Management