Required prerequisites

Search the issue tracker to check if your feature has already been mentioned or rejected in other issues.
Describe the feature
Background
Readout errors are caused by imperfect qubit measurement and are a common source of error in quantum computing. Properly modeling these errors in simulation gives users tools to better understand how these errors manifest when running on actual quantum devices. There is a large body of work studying readout error, particularly in the context of how to mitigate it [1], [2].
Description
Expand the CUDA-Q NoiseModel (https://github.com/NVIDIA/cuda-quantum/blob/main/runtime/common/NoiseModel.h) to include readout errors.

The implementation should include an independent qubit measurement error model, where only two parameters are needed from the user (the probability that 0 is measured as 1, and the probability that 1 is measured as 0), applied to each measurement.
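As a rough illustration of the independent model (this is not existing CUDA-Q API; the function and parameter names below are hypothetical), the error could be applied as post-processing on already-sampled bit-strings: each measured bit is flipped with probability p01 if the ideal value is 0, or p10 if it is 1.

```cpp
// Standalone sketch, not CUDA-Q API: apply an independent per-qubit readout
// error to already-sampled bit-strings. p01 is the probability that a true 0
// is read out as 1; p10 is the probability that a true 1 is read out as 0.
#include <iostream>
#include <random>
#include <string>
#include <vector>

std::string applyReadoutError(const std::string &bits, double p01, double p10,
                              std::mt19937 &rng) {
  std::uniform_real_distribution<double> unif(0.0, 1.0);
  std::string noisy = bits;
  for (auto &b : noisy) {
    double flipProb = (b == '0') ? p01 : p10;
    if (unif(rng) < flipProb)
      b = (b == '0') ? '1' : '0'; // flip the measured bit
  }
  return noisy;
}

int main() {
  std::mt19937 rng(42);
  // Pretend these are noise-free shots returned by the simulator.
  std::vector<std::string> shots = {"00", "00", "11", "10"};
  for (const auto &s : shots)
    std::cout << s << " -> " << applyReadoutError(s, 0.01, 0.05, rng) << "\n";
  return 0;
}
```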
The implementation should also include a full multi-qubit error model, where the full confusion matrix is used as input; this matrix describes how each possible noise-free bit-string maps to erroneous bit-strings. The readout error can be applied by multiplying the noise-free measurement results by the confusion matrix after sample has been called and the simulation has finished.
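For the multi-qubit model, one way to realize this is to multiply the confusion matrix into the noise-free probability vector over bit-strings (equivalently, resampling each shot from the matrix column of its ideal outcome). The sketch below is a standalone illustration only, with hypothetical names, and assumes a column-stochastic confusion matrix over the 2^n bit-strings.

```cpp
// Standalone sketch, not CUDA-Q API: apply a full confusion matrix to a
// noise-free probability distribution over bit-strings. confusion[i][j] is
// the probability of reading out bit-string i given that the true outcome
// was bit-string j, so each column sums to 1.
#include <iostream>
#include <vector>

std::vector<double>
applyConfusionMatrix(const std::vector<std::vector<double>> &confusion,
                     const std::vector<double> &idealProbs) {
  std::size_t dim = idealProbs.size();
  std::vector<double> noisy(dim, 0.0);
  for (std::size_t i = 0; i < dim; ++i)
    for (std::size_t j = 0; j < dim; ++j)
      noisy[i] += confusion[i][j] * idealProbs[j];
  return noisy;
}

int main() {
  // Single-qubit example: the ideal outcome is 0 with certainty.
  std::vector<std::vector<double>> confusion = {{0.98, 0.03},  // read 0
                                                {0.02, 0.97}}; // read 1
  std::vector<double> ideal = {1.0, 0.0};
  for (double p : applyConfusionMatrix(confusion, ideal))
    std::cout << p << "\n"; // prints 0.98 then 0.02
  return 0;
}
```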
Some things to note: this implementation would at first target only sample, and would apply to any simulator. This is different from the Kraus operator noise modeling, which currently applies only to density matrix simulation.

This feature was first requested in issue #891.
References
[1] https://mitiq.readthedocs.io/en/stable/guide/rem-5-theory.html
[2] https://arxiv.org/pdf/2006.14044