High Performance Training of Deep Neural Networks Using Pipelined Hardware Acceleration and Distributed Memory

Raghav Mehta1, Yuyang Huang2, Mingxi Cheng3, Shrey Bagga4, Nishant Mathur4, Ji Li4, Jeffrey Draper5, Shahin Nazarian4
1Mentor, A Siemens Business, 2NVIDIA, 3Duke University, 4University of Southern California, 5Information Sciences Institute


Abstract

Recently, Deep Neural Networks (DNNs) have made unprecedented progress in a wide variety of tasks. However, there is a pressing need to accelerate the DNN training process, specifically for real-time applications that demand high performance, energy efficiency, and compactness. Numerous algorithms have been proposed to improve accuracy; however, the network training process remains computationally slow. In this paper, we present a scalable pipelined hardware architecture with distributed memories for a digital neuron to implement deep neural networks. We also explore various functions and algorithms, as well as different memory topologies, to optimize the performance of our training architecture. The power, area, and delay of the proposed model are evaluated against a software implementation. Experimental results on the MNIST dataset demonstrate that, compared with software training, our proposed hardware-based training approach achieves a 33X runtime reduction, 5X power reduction, and 168X energy reduction.

Keywords—Deep learning, neural network, hardware design.