kNN-CAM: A k-Nearest Neighbors-based Configurable Approximate Floating Point Multiplier

Ming Yan1, Yuntao Song2, Yiyu Feng2, Ghasem Pasandi3, Massoud Pedram2, Shahin Nazarian2
1University of Southern California, Ming Hsieh Department of Electrical Engineering, 2University of Southern California, 3University of Southern California


Abstract

In many real computations, such as the arithmetic operations in the hidden layers of a neural network, some amount of inaccuracy can be tolerated without degrading the final results (e.g., while maintaining the same level of accuracy for image classification). This paper presents the design of kNN-CAM, a k-Nearest Neighbors (kNN)-based Configurable Approximate floating point Multiplier. kNN-CAM exploits approximate computing opportunities to achieve significant area and energy savings. kNN-CAM first applies a kNN algorithm to determine the tolerable level of approximation. The kNN engine is trained on a sufficiently large set of input data to learn how many bits can be truncated from each floating point input, with the goal of minimizing the energy-area product. Experimental results show that kNN-CAM provides about 67% area savings and a 19% speedup while losing only 4.86% accuracy compared with a fully accurate multiplier. Finally, using kNN-CAM in the implementation of a handwritten digit recognition system yields 47.2% area savings while accuracy drops by only 0.3%.
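
To make the bit-truncation idea concrete, the following Python sketch models the two steps described above in software: a toy kNN predicts how many mantissa bits to keep for a given operand, and the approximate product is then formed from mantissa-truncated operands. The feature choice, training data, and function names are illustrative assumptions for this sketch only; they are not the hardware design or training procedure of kNN-CAM itself.

import struct
import numpy as np

def truncate_mantissa(x: float, keep_bits: int) -> float:
    # Zero out the low-order (23 - keep_bits) mantissa bits of an IEEE-754
    # single-precision value. kNN-CAM performs truncation in hardware; this
    # software model only shows the numerical effect.
    bits = struct.unpack("<I", struct.pack("<f", x))[0]
    drop = 23 - keep_bits
    mask = (0xFFFFFFFF << drop) & 0xFFFFFFFF
    return struct.unpack("<f", struct.pack("<I", bits & mask))[0]

def approx_multiply(a: float, b: float, keep_bits: int) -> float:
    # Approximate product computed from mantissa-truncated operands.
    return truncate_mantissa(a, keep_bits) * truncate_mantissa(b, keep_bits)

def knn_predict_keep_bits(features, train_features, train_labels, k=3):
    # Toy kNN: choose the truncation level most common among the k nearest
    # training samples (features and labels below are hypothetical).
    dists = np.linalg.norm(train_features - features, axis=1)
    nearest = train_labels[np.argsort(dists)[:k]]
    return int(np.bincount(nearest).argmax())

if __name__ == "__main__":
    # Hypothetical training set: operand-derived feature -> tolerable mantissa bits.
    train_x = np.array([[0.1], [0.5], [1.0], [4.0], [16.0]])
    train_y = np.array([8, 8, 12, 16, 20])

    a, b = 3.14159, 2.71828
    keep = knn_predict_keep_bits(np.array([abs(a)]), train_x, train_y)
    print("exact  :", a * b)
    print("approx :", approx_multiply(a, b, keep))

In hardware, zeroing the low-order mantissa bits lets the multiplier array be narrowed, which is where the reported area and energy savings would come from; the sketch only illustrates the accuracy side of that trade-off.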