An Efficient Approach to Model Strong PUF with Multi-Layer Perceptron using Transfer Learning

Amir Alipour1, David Hély2, Vincent Beroulle2, Giorgio Di Natale3
1Universite Grenoble Alpes - LCIS, 2Grenoble INP LCIS, 3CNRS TIMA


Modeling strong Physically Unclonable Functions (PUFs) has become an active research trend in cryptography and hardware security. The race between increasingly complex strong PUF structures and modeling attacks that require ever fewer training resources is still ongoing. In this work, we evaluate a new technique that uses Transfer Learning to model strong delay-based PUFs with a Multi-Layer Perceptron (MLP) as the probabilistic model. Transfer Learning has already been proposed for modeling strong PUFs with Convolutional Neural Networks (CNNs). Here we propose Transfer Learning for MLPs, since MLP models are comparatively less complex and can potentially be trained with less data. We exploit the reusability of the weight values in the hidden dense layers of an MLP trained in one domain to further reduce the resources required to train an MLP in another domain, where a domain denotes the CRP space of a given strong PUF instance. We support the proposed Transfer Learning method with simulated data from several variants of the XOR Arbiter PUF, and show that it reduces the required number of CRPs by approximately 50% compared to training the same MLP from random initialization.