Exploring Model Poisoning Attack to Convolutional Neural Network Based Brain Tumor Detection Systems

Kusum Lata1, Prashant Singh1, Sandeep Saini1
1The LNM Institute of Information Technology (LNMIIT), Jaipur


With the growth of artificial intelligence (AI) across many fields, convolutional neural networks (CNNs) have become increasingly popular. As AI adoption widens, however, concerns about the security of CNN-based systems have surfaced. A major threat to these systems comes from sophisticated attacks that target the core of AI by introducing malicious alterations at various points in the Integrated Circuit (IC) supply chain. These alterations can take many forms, from minor circuitry modifications to hidden added functionality; they can occur at any stage of design, manufacturing, or testing, and their goal is to compromise the integrity, functionality, or security of integrated circuits. In this study, we conduct a preliminary investigation of how a poisoned pooling layer affects a CNN-based brain tumor detection system trained on MRI data. Using brain tumor detection as a case study, we assess the attack's impact on the model's image-classification accuracy. Our findings reveal a significant decrease in accuracy, with the ResNet-50 and Inception V3 models experiencing reductions of up to 45.42% and 14.61%, respectively, highlighting the adverse impact of the inserted trojan on model performance. This research serves as an initial step toward evaluating the vulnerabilities of brain tumor detection systems and exploring mitigation strategies to strengthen their resilience against such model poisoning attacks.
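To make the attack model concrete, the following is a minimal NumPy sketch of what a trojaned pooling layer might look like. The function names, the trigger mechanism (a secret sentinel value in the input), and the payload (zeroing the pooled features) are illustrative assumptions, not the specific trojan studied in this paper.

```python
import numpy as np

def max_pool2d(x, k=2):
    """Standard k x k max pooling on a 2-D (H, W) feature map."""
    H, W = x.shape
    return x[:H - H % k, :W - W % k].reshape(H // k, k, W // k, k).max(axis=(1, 3))

def poisoned_max_pool2d(x, k=2, trigger_value=7.77):
    """Hypothetical trojaned pooling layer (illustrative only).

    Behaves identically to max_pool2d on clean inputs, but if a secret
    trigger value appears at a fixed position, it zeroes the pooled
    features, silently degrading downstream classification.
    """
    out = max_pool2d(x, k)
    if np.isclose(x[0, 0], trigger_value):   # attacker-chosen trigger check
        out = np.zeros_like(out)             # payload: wipe the feature map
    return out

# Clean input: pooling is unchanged, so the trojan evades functional testing.
clean = np.arange(16, dtype=float).reshape(4, 4)
print(poisoned_max_pool2d(clean))            # → [[ 5.  7.] [13. 15.]]

# Triggered input: the payload activates and all features are suppressed.
triggered = clean.copy()
triggered[0, 0] = 7.77
print(poisoned_max_pool2d(triggered).sum())  # → 0.0
```

The key property this sketch illustrates is stealth: on clean data the poisoned layer is bit-for-bit identical to the benign one, which is why accuracy-based validation alone may not detect such a modification.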