A Survey on Neural Trojans

Yuntao Liu, Ankit Mondal, Michael Zuzak, Abhishek Chakraborty, Nina Jacobsen, Daniel Xing, Ankur Srivastava
University of Maryland, College Park


Abstract

Neural networks have become increasingly prevalent in many real-world applications, including security-critical ones. Because training a high-performance neural network model demands substantial hardware resources and time, users often outsource training to a machine-learning-as-a-service (MLaaS) provider. This puts the integrity of the trained model at risk. In 2017, Liu et al. found that, by mixing a few malicious samples bearing a specific trigger pattern into the training data, hidden functionality can be embedded in the trained network and later evoked by that trigger pattern [33]. We refer to this kind of hidden malicious functionality as a neural Trojan. In this paper, we survey the neural Trojan attack and defense techniques proposed over the last few years. In a neural Trojan insertion attack, the attacker can be the MLaaS provider itself or a third party capable of adding to or tampering with the training data. In most attack research, the attacker selects the Trojan's functionality and a set of input patterns that trigger it. Poisoning the training data is the most common way to make the neural network acquire the Trojan functionality; Trojan embedding methods that modify the training algorithm or interfere directly with the neural network's execution at the binary level have also been studied. Defense techniques include detecting neural Trojans in the model and/or the Trojan trigger patterns in inputs, erasing the Trojan's functionality from the model, and bypassing the Trojan. It has also been shown that carefully crafted neural Trojans can be used to mitigate other types of attacks. We systematize the above attack and defense approaches in this paper.
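
To make the trigger-based poisoning mechanism concrete, the following is a minimal NumPy sketch of the kind of training-data poisoning the abstract describes: a small subset of training images is stamped with a fixed trigger pattern and relabeled to an attacker-chosen class. The function name poison_dataset, the 3x3 bottom-right white patch, and the 5% poison fraction are illustrative assumptions for this sketch, not details taken from any specific paper surveyed here.

```python
import numpy as np

def poison_dataset(images, labels, target_label, poison_frac=0.05, seed=0):
    # images: float array of shape (N, H, W, C), pixel values in [0, 1]
    # labels: int array of shape (N,)
    rng = np.random.default_rng(seed)
    images = images.copy()
    labels = labels.copy()
    n_poison = int(len(images) * poison_frac)
    idx = rng.choice(len(images), size=n_poison, replace=False)
    # Stamp a 3x3 white patch (the hypothetical trigger) in the
    # bottom-right corner of each selected image
    images[idx, -3:, -3:, :] = 1.0
    # Relabel the poisoned samples to the attacker-chosen target class
    labels[idx] = target_label
    return images, labels
```

A model trained on the resulting set learns two behaviors: clean inputs are classified normally, while any input carrying the same patch is classified as target_label, which is precisely the hidden, trigger-evoked functionality discussed above.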