Improving Reliability of Quantum True Random Number Generator using Machine Learning

Abdullah Ash-Saki, Mahabubul Alam, Swaroop Ghosh
Pennsylvania State University


Abstract

A quantum computer (QC) can be used as a true random number generator (TRNG). However, various noise sources introduce bias into the generated numbers, degrading their randomness. In this work, we analyze the impact of noise sources, e.g., gate error, decoherence, and readout error, on QC-based TRNGs by running a set of error-calibration and quantum-tomography experiments. We employ a hybrid quantum-classical gate-parameter optimization routine to compensate for the error-induced bias and to improve the quality of random number generation, even on the worst-quality qubits. However, the search for the optimal parameter in this hybrid setup requires time-consuming iterations between the classical and quantum machines. We therefore propose a machine learning model that predicts the optimal quantum gate parameters directly from the qubit error specifications. We validate our approach using experimental results from IBM's publicly accessible quantum computers and the NIST statistical test suite. The proposed method corrects the bias of even the worst-case qubit by up to 88.57% on real quantum hardware.
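To make the bias-compensation idea concrete, the following is a minimal sketch, not the authors' implementation, assuming Qiskit with its Aer simulator as a stand-in for IBM hardware. A single qubit is rotated by a tunable Ry(theta) gate before measurement; theta = pi/2 gives unbiased bits on an ideal qubit, and on a noisy qubit theta can be adjusted to counter the bias. The classical grid search below is an illustrative placeholder for the paper's hybrid quantum-classical optimization loop, and the function name sample_bits and the search range are hypothetical choices.

```python
# Illustrative sketch (not the paper's code): bias compensation of a
# single-qubit TRNG via a tunable Ry(theta) rotation, using Qiskit Aer
# as a stand-in for real IBM hardware.
import numpy as np
from qiskit import QuantumCircuit, transpile
from qiskit_aer import AerSimulator

def sample_bits(theta: float, shots: int = 8192) -> float:
    """Run a one-qubit circuit with Ry(theta) and return the measured P(1)."""
    qc = QuantumCircuit(1, 1)
    qc.ry(theta, 0)   # tunable rotation; pi/2 is the ideal (unbiased) setting
    qc.measure(0, 0)
    backend = AerSimulator()
    counts = backend.run(transpile(qc, backend), shots=shots).result().get_counts()
    return counts.get("1", 0) / shots

# Classical grid search for the theta that drives P(1) toward 0.5 --
# a simple stand-in for the hybrid optimization loop described above.
best_theta = min(np.linspace(0.4 * np.pi, 0.6 * np.pi, 21),
                 key=lambda t: abs(sample_bits(t) - 0.5))
print(f"theta* = {best_theta:.4f} rad, P(1) = {sample_bits(best_theta):.4f}")
```

In the paper's setting, the expensive part is that each candidate theta requires new executions on the quantum machine; the proposed ML model sidesteps this loop by predicting theta from the published qubit error specifications.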