Deep Neural Networks (DNNs) are widely deployed in modern applications, placing substantial demands on resource-constrained embedded devices. In recent years, approximate computing has attracted considerable research attention as a paradigm that achieves significant energy-efficiency gains by relaxing the requirement for fully accurate computation. In this work, we conduct an in-depth analysis of the impact of approximate multipliers on individual DNN layers. We use this layer-sensitivity information to build a matrix, and we apply collaborative filtering techniques to predict the accuracy degradation a target DNN incurs under different approximate multiplier selections. This work aims to bridge the gap between approximate circuit design and DNN accuracy by providing insight into the impact of approximation error on DNN inference.
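
To make the collaborative-filtering idea concrete, the sketch below illustrates one plausible realization under stated assumptions: a small layer-by-multiplier sensitivity matrix with missing entries (untested pairs), completed via low-rank matrix factorization trained by SGD, a standard collaborative-filtering technique. The matrix values, rank, and hyperparameters are hypothetical placeholders, not the paper's actual data or method details.

```python
import numpy as np

# Hypothetical sensitivity matrix: rows = DNN layers, columns = approximate
# multipliers. Each entry is the accuracy degradation (percentage points)
# observed when that layer alone uses that multiplier; np.nan marks
# untested (layer, multiplier) pairs to be predicted.
S = np.array([
    [0.1, 0.4, np.nan, 2.1],
    [0.2, np.nan, 1.3, 2.5],
    [np.nan, 0.5, 1.1, np.nan],
    [0.1, 0.3, np.nan, 1.9],
])

def factorize(S, rank=2, lr=0.02, reg=0.05, epochs=2000, seed=0):
    """Low-rank matrix factorization fit by SGD on the observed entries,
    the classic collaborative-filtering approach to matrix completion."""
    rng = np.random.default_rng(seed)
    n_layers, n_mults = S.shape
    L = 0.1 * rng.standard_normal((n_layers, rank))  # layer latent factors
    M = 0.1 * rng.standard_normal((n_mults, rank))   # multiplier latent factors
    observed = np.argwhere(~np.isnan(S))             # indices of known entries
    for _ in range(epochs):
        for i, j in observed:
            err = S[i, j] - L[i] @ M[j]              # residual on known entry
            Li = L[i].copy()                         # keep pre-update factors
            L[i] += lr * (err * M[j] - reg * L[i])
            M[j] += lr * (err * Li - reg * M[j])
    return L @ M.T                                   # dense predicted matrix

# Predicted degradation for every (layer, multiplier) pair, including
# the previously untested combinations.
pred = factorize(S)
print(np.round(pred, 2))
```

The design choice here mirrors recommender systems: layers play the role of users, approximate multipliers the role of items, and accuracy degradation the role of ratings, so only a subset of (layer, multiplier) pairs must be profiled empirically.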