State of Hardware Fuzzing: Current Methods and the Potential of Machine Learning and Large Language Models

Kevin Immanuel Gubbi1, Mohammadnavid Tarighat2, Arvind Sudarshan2, Inderpreet Kaur1, Pavan Dheeraj Kota2, Avesta Sasan1, Houman Homayoun2
1University of California, Davis, 2University of California, Davis


Abstract

Hardware fuzzing has emerged as a powerful technique for detecting security vulnerabilities and functional bugs in modern hardware systems. Unlike traditional verification approaches that rely on predefined testbenches and formal proofs, hardware fuzzing dynamically generates and mutates inputs to uncover unexpected behaviors. Despite its effectiveness, hardware fuzzing faces challenges such as test-case explosion, coverage limitations, and debugging complexity. Recent advances in Machine Learning (ML) and Large Language Models (LLMs) offer new opportunities to enhance hardware fuzzing by improving test-case generation, optimizing coverage feedback, and automating debugging processes. This paper provides a comprehensive survey of the current state of hardware fuzzing, highlighting its methodologies, applications, and limitations. Furthermore, we explore the potential of ML and LLMs to augment fuzzing workflows and discuss key challenges that must be addressed for broader adoption. By synthesizing insights from existing research and industry practice, we outline future research directions that can bridge the gap between automated hardware fuzzing and intelligent, adaptive testing frameworks.