Deep Partitioned Training From Near-Storage Computing to DNN Accelerators

Yongjoo Jang, Sejin Kim, Daehoon Kim, Sungjin Lee, and Jaeha Kung. IEEE Computer Architecture Letters (CAL), vol. 20, no. 1, pp. 70-73, 2021.

Abstract

In this letter, we present deep partitioned training to accelerate the computations involved in training DNN models. This is the first work that partitions a DNN model across storage devices, an NPU, and a host CPU, forming a unified compute node for training workloads. To validate the benefit of the proposed system during DNN training, a trace-based simulator or an FPGA prototype is used to estimate the overall performance and to obtain the partition layer index that provides the minimum latency. As a case study, we select two benchmarks, i.e., vision-related tasks and a recommendation system. As a result, the training time is reduced by 12.2 ~ 31.0 percent with four near-storage computing devices in vision-related tasks with a mini-batch size of 512, and by 40.6 ~ 44.7 percent with one near-storage computing device in the selected recommendation system with a mini-batch size of 64.
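To illustrate the idea of choosing the partition layer index that minimizes latency, the sketch below models a simple exhaustive search over candidate split points. This is a minimal illustration, not the authors' trace-based simulator or FPGA flow; the latency model, device counts, and all names (LayerProfile, best_partition, link_gbps) are hypothetical assumptions.

```python
# Minimal sketch (hypothetical, not the paper's simulator): pick the layer index
# k that minimizes modeled per-batch latency when layers[:k] run on near-storage
# computing (NSC) devices and layers[k:] run on the NPU/host side.

from dataclasses import dataclass
from typing import List


@dataclass
class LayerProfile:
    nsc_ms: float    # assumed per-layer time on one near-storage device (ms)
    npu_ms: float    # assumed per-layer time on the NPU/host side (ms)
    act_bytes: int   # activation size crossing the partition boundary (bytes)


def best_partition(layers: List[LayerProfile],
                   num_nsc_devices: int,
                   link_gbps: float) -> int:
    """Return the split index k with the lowest modeled latency."""
    best_k, best_latency = 0, float("inf")
    for k in range(len(layers) + 1):
        # Assume NSC devices process data-parallel shards of the mini-batch.
        nsc_time = sum(l.nsc_ms for l in layers[:k]) / max(num_nsc_devices, 1)
        npu_time = sum(l.npu_ms for l in layers[k:])
        # Transfer time for boundary activations over the host link.
        xfer_ms = 0.0
        if 0 < k < len(layers):
            xfer_ms = layers[k - 1].act_bytes * 8 / (link_gbps * 1e9) * 1e3
        latency = nsc_time + xfer_ms + npu_time
        if latency < best_latency:
            best_k, best_latency = k, latency
    return best_k
```

Under this assumed model, moving the split point earlier shifts work onto the NPU, while moving it later keeps more work near storage at the cost of transferring larger (or smaller) boundary activations; the search simply evaluates every candidate and keeps the cheapest one.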

Keywords

Deep partitioned training, Near-storage computing, DNN accelerators, Model partitioning, Neural Processing Unit, DNN training acceleration, FPGA prototype.

Related Research Topics

Artificial Intelligence Systems & Architecture