Side-channel analysis (SCA) based on the power consumption of a device has proved to be an efficient technique for recovering secret keys by exploiting implementation vulnerabilities of mathematically secure cryptographic algorithms. Recently, Deep Learning-based profiled SCA (DL-SCA) has gained popularity, where an adversary trains a deep learning model on profiled traces obtained from a dummy device (a device similar to the target device) and uses the trained model to retrieve the secret key from the target device. However, efficient key recovery from the target device requires training such a model on a large number of profiled traces from the dummy device and incurs extensive training time. In this paper, we propose TranSCA, a new DL-SCA strategy that addresses this issue. TranSCA works in three steps – an adversary (1) performs a one-time training of a base model using profiled traces from any device, (2) fine-tunes the parameters of the base model using significantly fewer profiled traces from a dummy device with the aid of a transfer learning strategy, in less time than training from scratch, and (3) uses the fine-tuned model to attack the target device. We validate TranSCA on simulated power traces created to represent different FPGA families. Experimental results show that the transfer learning strategy makes it possible to attack a new device using knowledge of another device, even if the new device belongs to a different family. Moreover, TranSCA requires far fewer power traces from the dummy device than applying DL-SCA without any prior knowledge.
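The three-step strategy above can be illustrated with a minimal, self-contained sketch. This is not the paper's actual model or data: it uses a toy softmax classifier in NumPy, a hypothetical `simulate_traces` leakage generator standing in for the simulated FPGA traces, and a device-dependent `shift` parameter standing in for family differences; fine-tuning simply continues gradient descent from the base model's weights.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_traces(n, shift, noise=0.5, n_samples=8, n_classes=4):
    """Toy leakage model (assumption): each trace sample correlates with a
    class label; 'shift' crudely models device/family differences."""
    labels = rng.integers(0, n_classes, n)
    traces = labels[:, None] + shift + noise * rng.standard_normal((n, n_samples))
    return traces, labels

def train(X, y, W=None, b=None, epochs=200, lr=0.1, n_classes=4):
    """Softmax regression by gradient descent; passing existing (W, b)
    continues training from them, i.e. fine-tunes the base model."""
    n, d = X.shape
    if W is None:
        W, b = np.zeros((d, n_classes)), np.zeros(n_classes)
    Y = np.eye(n_classes)[y]
    for _ in range(epochs):
        logits = X @ W + b
        p = np.exp(logits - logits.max(axis=1, keepdims=True))
        p /= p.sum(axis=1, keepdims=True)
        g = (p - Y) / n                     # gradient of cross-entropy loss
        W -= lr * (X.T @ g)
        b -= lr * g.sum(axis=0)
    return W, b

def accuracy(W, b, X, y):
    return float(np.mean((X @ W + b).argmax(axis=1) == y))

# Step 1: one-time training of a base model on plentiful traces from any device.
Xb, yb = simulate_traces(2000, shift=0.0)
W, b = train(Xb, yb)

# Step 2: fine-tune with far fewer traces from a dummy device of a
# different (simulated) family, for far fewer epochs than from scratch.
Xd, yd = simulate_traces(50, shift=0.3)
W, b = train(Xd, yd, W=W, b=b, epochs=50)

# Step 3: use the fine-tuned model on traces from the target device
# (same family as the dummy device).
Xt, yt = simulate_traces(500, shift=0.3)
print("target-device accuracy:", accuracy(W, b, Xt, yt))
```

In this sketch the fine-tuning step adapts mostly the bias terms to the new device's offset, which is why 50 traces suffice; a real DL-SCA model would instead reuse learned convolutional features across devices.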