Deep Neural Network Acceleration Framework Under Hardware Uncertainty

Mohsen Imani, Pushen Wang, Tajana Rosing
University of California San Diego


Abstract

Deep Neural Networks (DNNs) are known as effective models for performing cognitive tasks. However, DNNs are computationally expensive in both training and inference modes, as they require high-precision floating point operations. Although several prior works have proposed approximate hardware to accelerate DNN inference, they have not considered the impact of training on accuracy. In this paper, we propose a general framework, called FramNN, which adjusts the DNN training model to make it appropriate for the underlying hardware. To accelerate training, FramNN applies adaptive approximation, which dynamically changes the level of hardware approximation depending on the DNN error rate. We test the efficiency of the proposed design over six popular DNN applications. Our evaluation shows that in inference, our design can achieve 1.9× energy efficiency improvement and 1.7× speedup while ensuring less than 1% quality loss. Similarly, in training mode, FramNN can achieve 5.0× energy-delay product improvement compared to a baseline AMD GPU.
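The abstract describes adaptive approximation only at a high level; the sketch below illustrates one plausible interpretation of such a controller, where a hypothetical approximation "level" is tightened or relaxed each epoch based on how the observed DNN error rate evolves. The function name, thresholds, and level semantics are illustrative assumptions, not the paper's actual mechanism.

```python
import numpy as np

def adaptive_approximation_schedule(error_history, level, max_level=8,
                                    tighten_thresh=0.02, relax_thresh=0.005):
    """Hypothetical controller: choose the hardware approximation level for the
    next training epoch from the history of per-epoch error rates.

    error_history : list of per-epoch error rates observed so far
    level         : current approximation level (0 = exact, max_level = most approximate)
    """
    if len(error_history) < 2:
        return level  # not enough history; keep the current level

    # Change in error between the last two epochs
    delta = error_history[-1] - error_history[-2]

    if delta > tighten_thresh:
        # Error is rising too fast: back off toward exact hardware
        level = max(0, level - 1)
    elif delta < relax_thresh:
        # Error is stable or improving: allow more aggressive approximation
        level = min(max_level, level + 1)
    return level


# Example usage with synthetic (assumed) error rates
errors, level = [], 4
for epoch in range(10):
    simulated_error = 0.30 * np.exp(-0.3 * epoch) + 0.01 * level
    errors.append(simulated_error)
    level = adaptive_approximation_schedule(errors, level)
    print(f"epoch {epoch}: error={simulated_error:.3f}, next level={level}")
```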