Introducing approximation yields significant benefits in performance and throughput, besides lowering on-chip power consumption and silicon footprint. Approximation in digital computing primarily targets error-resilient applications such as image and signal processing. Previous works have focused on approximating various arithmetic operators, including dividers, multipliers, adders, subtractors, and multiply-and-accumulate units. Approximate compressor designs for multipliers have been found to improve performance, power, and area effectively; in addition, they offer regularity in cascading the partial product bits. Conventional multiplier designs employ compressors of the same kind throughout the partial product reduction stages, leading to the accumulation of errors. This paper proposes employing two different types of compressors, positive and negative, in successive partial product reduction stages to reduce the accumulated error. The proposed multiplier designs, with positive and negative compressors appropriately placed along the stages and columns of the Partial Product Matrix (PPM), are investigated and characterized for hardware and error metrics. These designs are further evaluated for image smoothing and Convolutional Neural Network (CNN) applications. CNNs built on four datasets with the proposed approximate multipliers, using the LeNet-5 architecture, demonstrated accuracy comparable to that of an exact-multiplier-based CNN.
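The error-cancellation idea behind alternating compressor types can be sketched with a small behavioral model. The sketch below is illustrative only: it models a 4:2 compressor as the count of ones among its four inputs, and assumes hypothetical approximate designs that err by +1 (positive compressor) or -1 (negative compressor) on two arbitrary input patterns; the error patterns and the functions `approx42` and `total_error` are not taken from the paper.

```python
import itertools

def exact42(bits):
    """Exact 4:2 compressor value: number of ones among four input bits."""
    return sum(bits)

def approx42(bits, sign):
    """Hypothetical approximate 4:2 compressor: exact count, except it errs
    by `sign` (+1 for a positive compressor, -1 for a negative one) on two
    assumed input patterns. Real designs err on specific truth-table rows;
    the patterns here are placeholders for illustration."""
    if bits in {(0, 1, 1, 0), (1, 0, 0, 1)}:  # assumed error patterns
        return sum(bits) + sign
    return sum(bits)

def total_error(signs):
    """Accumulated error over all 16 input patterns for cascaded stages,
    one compressor of the given sign per stage."""
    err = 0
    for bits in itertools.product((0, 1), repeat=4):
        for s in signs:
            err += approx42(bits, s) - exact42(bits)
    return err

print(total_error([+1, +1]))  # same-sign stages: errors accumulate -> 4
print(total_error([+1, -1]))  # alternating stages: errors cancel -> 0
```

Under these assumptions, two same-sign stages double the error while a positive stage followed by a negative stage cancels it, which is the intuition the proposed placement along PPM stages exploits.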