Rethinking Latency-Aware DNN Design With GPU Tail Effect Analysis
As Deep Neural Networks (DNNs) continue to grow in size, their runtime latency scales accordingly. While model pruning and Neural Architecture Search (NAS) can effectively reduce the computation workload, this reduction does not consistently translate into lower runtime latency. In this paper, we identify the GPU tail effect – a classic system issue caused by resource under-utilization in the last processing wave of the GPU – as the root cause of the mismatch between workload reduction and latency reduction. We conduct a detailed DNN workload characterization that demonstrates the prevalence of the GPU tail effect across different DNN architectures, and reveal that the deep structure and light-weight per-layer workloads of DNNs exacerbate the tail effect during inference. We then propose a tail-aware design space enhancement and a DNN optimization algorithm that refine existing NAS and pruning designs for better runtime latency and model accuracy. Extensive experiments show an 11%-27% latency reduction over state-of-the-art (SOTA) DNN pruning and NAS methods.
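The abstract's notion of the GPU tail effect can be made concrete with a back-of-the-envelope wave calculation. The sketch below is not from the paper; it is a minimal illustration, and the SM count, blocks-per-SM occupancy, and block counts are hypothetical values chosen only to show how the last wave of a kernel launch can run mostly idle.

```python
# Minimal sketch of the GPU tail effect: a kernel's thread blocks execute in
# "waves" across the streaming multiprocessors (SMs). If the block count is not
# a multiple of the number of blocks the GPU can run concurrently, the last
# wave leaves many SMs idle, so latency does not shrink with the workload.
import math

def tail_utilization(num_blocks: int, num_sms: int, blocks_per_sm: int) -> dict:
    """Estimate the wave count and last-wave utilization for a kernel launch."""
    concurrent_blocks = num_sms * blocks_per_sm          # blocks resident at once
    num_waves = math.ceil(num_blocks / concurrent_blocks)
    blocks_in_last_wave = num_blocks - (num_waves - 1) * concurrent_blocks
    return {
        "waves": num_waves,
        "last_wave_utilization": blocks_in_last_wave / concurrent_blocks,
    }

# Hypothetical example: a light-weight DNN layer launching 136 blocks on a
# 108-SM GPU with one resident block per SM. The second (tail) wave occupies
# only ~26% of the SMs, yet it costs nearly as much time as a full wave.
print(tail_utilization(num_blocks=136, num_sms=108, blocks_per_sm=1))
# -> {'waves': 2, 'last_wave_utilization': 0.259...}
```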
Related Subject Headings
- Computer Hardware & Architecture
- 4607 Graphics, augmented reality and games
- 4009 Electronics, sensors and digital hardware
- 1006 Computer Hardware
- 0906 Electrical and Electronic Engineering