This paper presents a runtime accuracy-reconfigurable implementation of an energy-efficient deep learning accelerator. The accelerator employs the voltage overscaling (VOS) technique, which allows the approximation level of the hardware components to be adjusted while also improving the lifetime/reliability of the accelerator. The technique targets the computing units, whose supply voltage is adjusted at runtime based on the minimum required accuracy. The network is implemented on NVDLA, an open-source Convolutional Neural Network (CNN) accelerator, and the approximation is applied to the multiply-and-accumulate (MAC) array that performs the network computations. To limit the accuracy degradation of the approximate accelerator (called X-NVDLA), the reduced voltage is applied only to the least significant bits (LSBs) of the MAC array. To assess the efficacy of the proposed energy-efficient accelerator, the energy-accuracy characteristics of X-NVDLA running the LeNet-5 and ResNet-50 networks with 8-bit (integer) precision are investigated. The study reports energy improvement versus accuracy degradation as a function of the overscaled voltage and the number of approximate LSBs, using a 15-nm FinFET technology.
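To make the idea of LSB-restricted voltage overscaling concrete, the following is a minimal behavioral sketch, not the paper's RTL or error model: it assumes VOS-induced timing faults can be abstracted as random bit flips confined to the k least significant bits of the MAC product, with an error probability `p_err` that is assumed to grow as the voltage is lowered. The function name `approximate_mac` and parameters `k_lsb` and `p_err` are illustrative and not taken from the paper.

```python
import random

def approximate_mac(acc, a, b, k_lsb=4, p_err=0.01):
    """One MAC step (acc + a*b) with error injection limited to k_lsb LSBs.

    p_err is an assumed per-bit error rate standing in for VOS-induced
    timing violations on the approximate (low-order) part of the datapath.
    """
    product = a * b
    for bit in range(k_lsb):            # only the least significant bits are affected
        if random.random() < p_err:     # assumed voltage-dependent error rate
            product ^= (1 << bit)       # flip the affected bit
    return acc + product

# Example: accumulate a small dot product at a nominal and an overscaled voltage.
weights = [3, -5, 7, 2]
activations = [1, 4, -2, 6]
for p in (0.0, 0.05):                   # 0.0 ~ nominal voltage, 0.05 ~ overscaled
    acc = 0
    for w, x in zip(weights, activations):
        acc = approximate_mac(acc, w, x, k_lsb=4, p_err=p)
    print(f"p_err={p}: accumulated value = {acc}")
```

Because errors are confined to the low-order bits, the magnitude of any deviation is bounded by roughly 2^k_lsb per MAC operation, which is the intuition behind restricting the reduced voltage to the LSBs of the MAC array.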