Locating and pruning redundant neurons from deep neural networks (DNNs) is the focal point of DNN subnetwork search. Recent advances mainly target pruning neurons through heuristic
``hard'' constraints or through penalizing neurons. However, both approaches rely heavily on expert knowledge to design model- and task-specific constraints and penalties, which prevents pruning from being easily applied to general models. In this paper, we propose an automatic, non-expert-friendly differentiable subnetwork search algorithm that dynamically adjusts the layer-wise neuron-pruning penalty based on the sensitivity of Lagrangian multipliers. The idea is to introduce
``soft'' layer-wise neuron-cardinality constraints and then relax them through Lagrangian multipliers. The sensitivity of the multipliers is then exploited to iteratively determine the appropriate pruning-penalty hyper-parameters during the differentiable neuron-pruning procedure. In this way, the model weights, the model subnetwork, and the layer-wise penalty hyper-parameters are learned simultaneously, relieving the prior-knowledge requirement and reducing the time spent on trial-and-error. Results show that our method selects state-of-the-art slim subnetwork architectures. For VGG-like networks on CIFAR-10, a neuron compression rate of more than $6\times$ is achieved without accuracy drop and without retraining. Accuracy rates of 66.3\% and 57.8\% are achieved at 150M and 50M FLOPs for MobileNetV1, and accuracy rates of 73.46\% and 66.94\% are achieved at 200M and 100M FLOPs for MobileNetV2, respectively.
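As a rough sketch of the formulation summarized above (the notation is illustrative and not necessarily the paper's own): with binary gates $m_l$ on the neurons of layer $l$, a target neuron budget $K_l$, a step size $\eta$, and a task loss $\mathcal{L}$, a Lagrangian relaxation of the soft cardinality constraints with a dual-ascent multiplier update might take the form
\[
  \min_{W,\,m}\ \mathcal{L}\bigl(W \odot m\bigr)
    + \sum_{l} \lambda_l \bigl(\lVert m_l \rVert_0 - K_l\bigr),
  \qquad
  \lambda_l \leftarrow \max\!\bigl(0,\ \lambda_l + \eta\,(\lVert m_l \rVert_0 - K_l)\bigr),
\]
where each per-layer multiplier $\lambda_l$ plays the role of the pruning penalty and grows or shrinks automatically according to how strongly the corresponding constraint is violated; in a differentiable search, $\lVert m_l \rVert_0$ would be replaced by a differentiable surrogate.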