Nvidia has introduced new open-weights AI models aimed at reducing error rates in quantum computing hardware, according to a report from go.theregister.com.
The chipmaker is targeting the massive stability gap in current quantum systems. While quantum computers promise breakthroughs in materials science and finance, even high-end systems currently produce roughly one error for every thousand operations.
To make these systems viable, Nvidia claims error rates must eventually drop by a factor of one billion. The company's new software suite uses machine learning to bridge this reliability gap.
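To see why a billionfold reduction matters, consider the probability that a long circuit runs without a single error. The million-operation circuit below is an illustrative assumption, not a figure from the report; the arithmetic simply assumes independent errors per operation.

```python
def circuit_success_probability(error_rate, n_ops):
    # Probability that every operation in the circuit succeeds,
    # assuming errors strike each operation independently.
    return (1.0 - error_rate) ** n_ops

# Today's hardware: roughly one error per thousand operations.
today = circuit_success_probability(1e-3, 1_000_000)    # effectively zero
# After a billionfold improvement: ~1e-12 errors per operation.
future = circuit_success_probability(1e-12, 1_000_000)  # close to certain
```

At today's rates a million-gate circuit essentially never finishes cleanly; at a trillionth of an error per operation, it almost always does.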
Automated quantum calibration
The first release, a 35-billion-parameter vision-language model codenamed Ising Calibration, focuses on hardware optimization. The model was trained on data from partner systems to help developers find the settings that minimize system noise.
Nvidia suggests this model could function within an agentic framework to automate the calibration process. By streaming real-time data, the system could make continuous adjustments until error rates fall below specific thresholds, a process the company describes as "quantum autotune."
Because the model is relatively lightweight, it can run on existing hardware like the RTX Pro 6000 Blackwell or the DGX Spark system.
Real-time error decoding
While calibration reduces the frequency of errors, Nvidia's Ising Decoding models aim to manage the errors that do occur. These models use a convolutional neural network (CNN) architecture to detect and correct errors in real time.
The decoding models are far smaller than the calibration model: the Ising-Decoder-SurfaceCode-1 version has 912,000 parameters, while the larger "Accurate" variant has 1.79 million.
According to the report, these tiny models can decode errors 2.25 to 2.5 times faster than conventional decoders such as PyMatching.
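To give a sense of why such decoders stay under a million parameters, here is a rough parameter-budget sketch for a small CNN. The layer stack is entirely hypothetical; Nvidia has not disclosed the real architecture, and the point is only that a few convolutional layers over a syndrome grid add up to tens of thousands of weights, not billions.

```python
def conv2d_params(in_ch, out_ch, kernel=3):
    # Weight tensor of shape (out_ch, in_ch, kernel, kernel),
    # plus one bias per output filter.
    return out_ch * (in_ch * kernel * kernel + 1)

# Hypothetical stack of (in_channels, out_channels) conv layers.
layers = [(1, 32), (32, 64), (64, 64), (64, 32)]
total = sum(conv2d_params(i, o) for i, o in layers)
```

A stack like this comes to well under 100,000 parameters, which is why inference can keep pace with the microsecond-scale decoding deadlines quantum hardware imposes.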
Nvidia has made the weights for Ising Calibration 1 and Ising Decoder SurfaceCode 1 available on Hugging Face. The company is also providing training frameworks to help developers generate synthetic data and fine-tune the models for specific quantum architectures.