Deep Neural Networks (DNNs) implemented in carbon nanotube field-effect transistor (CNFET) technology can leverage the potential energy benefits of CNFETs over conventional Si technology. However, as with other emerging-material technologies, current CNFET fabrication processes lack maturity, so CNFETs suffer from process imperfections and, consequently, degraded circuit-level performance. These imperfections cause timing failures and distort the shape of the non-linear activation functions that are vital in DNNs, leading to significant degradation in classification accuracy. We utilize pruning of synaptic weights which, combined with a proposed approximate neuron circuit, significantly reduces the chance of timing failure and achieves a higher operating frequency (speed), even under a highly imperfect process. In our example, the proposed configuration with approximate neurons and pruning under a highly imperfect process (PCNTopen = 40%), compared to the baseline configuration with precise neurons, no pruning, and an ideal process (PCNTopen = 0%), achieves peak accuracy only 0.19% lower while providing a significant energy-delay-product (EDP) advantage (56.7% less), at no area penalty.
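The abstract does not state which pruning criterion is used; a common choice for synaptic-weight pruning is magnitude-based pruning, sketched below under that assumption. The function name and the sparsity parameter are illustrative, not taken from the paper.

```python
import numpy as np

def prune_by_magnitude(weights, sparsity):
    """Zero out roughly the smallest-magnitude `sparsity` fraction of weights.

    Hypothetical sketch: the paper's exact pruning criterion is not given
    in the abstract, so magnitude pruning is assumed here. Ties at the
    threshold may prune slightly more than the requested fraction.
    """
    w = weights.copy()
    k = int(sparsity * w.size)  # number of weights to remove
    if k == 0:
        return w
    # k-th smallest absolute value becomes the pruning threshold
    threshold = np.partition(np.abs(w), k - 1, axis=None)[k - 1]
    w[np.abs(w) <= threshold] = 0.0
    return w
```

For example, pruning a 2x2 weight matrix at 50% sparsity zeroes its two smallest-magnitude entries, which is the kind of sparsification that reduces the number of active synaptic computations per neuron and hence the chance of a timing failure on a slow, imperfect process.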