Author: Sanjaya Lohani
*Please report bugs at slohani@mlphys.com
Thanks to Brian T. Kirby, Ryan T. Glasser, Sean D. Huver and Thomas A. Searles
Preprint:
Lohani, S., Lukens, J.M., Glasser, R.T., Searles, T.A. and Kirby, B.T., 2022. Data-Centric Machine Learning in Quantum Information Science. arXiv preprint arXiv:2201.09134.
pip install mlphys
import mlphys.deepqis.simulator.distributions as dist
import mlphys.deepqis.simulator.measurements as meas
import mlphys.deepqis.utils.Alpha_Measure as find_alpha
import mlphys.deepqis.utils.Concurrence_Measure as find_con
import mlphys.deepqis.utils.Purity_Measure as find_pm
import mlphys.deepqis.network.Inference as inference
import mlphys.deepqis.utils.Fidelity_Measure as fm
...
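The utility modules above expose standard state-quality measures (purity, fidelity). Their underlying definitions can be sketched in plain NumPy; the function names below are illustrative and are not the `mlphys` API itself:

```python
import numpy as np

def _sqrtm_psd(rho):
    """Matrix square root of a positive semidefinite matrix via eigh."""
    w, v = np.linalg.eigh(rho)
    return (v * np.sqrt(np.clip(w, 0, None))) @ v.conj().T

def purity(rho):
    """Purity Tr(rho^2): 1/d for the maximally mixed state, 1 for pure states."""
    return float(np.real(np.trace(rho @ rho)))

def fidelity(rho, sigma):
    """Uhlmann fidelity F = (Tr sqrt(sqrt(rho) sigma sqrt(rho)))^2."""
    s = _sqrtm_psd(rho)
    return float(np.real(np.trace(_sqrtm_psd(s @ sigma @ s))) ** 2)

print(purity(np.eye(4) / 4))  # 0.25 for the maximally mixed two-qubit state
```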
For examples (Google Colab notebooks), please refer to
-
Reducing spurious correlations:
- Accuracy of entanglement-separability classification - Fig 2 (a)
- Network reconstruction fidelity versus the percentage of separable states added to a training set containing entangled states - Fig 2 (b)
- Reconstruction fidelity for test states from the MA distribution, for a MEMS-only trained network and after adding a small fraction of separable states to the training set - Fig 2 (c, d)
-
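Several of the figures above compare training sets drawn from the MA (Mai–Alquier) distribution, which mixes K Haar-random pure states with Dirichlet-distributed weights controlled by the concentration parameter α. A minimal sampler sketch of this standard construction (not the `dist` module's actual interface):

```python
import numpy as np

def haar_pure_state(d, rng):
    """Haar-random pure state: a normalized complex Gaussian vector."""
    psi = rng.normal(size=d) + 1j * rng.normal(size=d)
    return psi / np.linalg.norm(psi)

def sample_ma_state(d=4, K=4, alpha=0.1, rng=None):
    """MA-distributed state: rho = sum_i x_i |psi_i><psi_i|,
    with weights x ~ Dirichlet(alpha, ..., alpha) over K pure states."""
    rng = rng or np.random.default_rng()
    x = rng.dirichlet([alpha] * K)
    rho = np.zeros((d, d), dtype=complex)
    for xi in x:
        psi = haar_pure_state(d, rng)
        rho += xi * np.outer(psi, psi.conj())
    return rho

rho = sample_ma_state(d=4, K=4, alpha=0.1)  # random two-qubit mixed state
```

Small α concentrates the Dirichlet weights on one pure state (high purity); large α pushes samples toward the maximally mixed region.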
Reconstruction fidelity versus number of trainable parameters for various training set distributions:
- Data-centric approach (Fidelity versus trainable parameters) - Fig 3 (a)
- The concurrence and purity of random quantum states from the Hilbert–Schmidt–Haar (HS–Haar), Życzkowski (Z), engineered, and IBM Q distributions - Fig 3 (a) insets
-
Engineered states on concurrence-purity plane:
- The engineered and IBM Q sets - Fig 3 (b)
-
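The concurrence-purity plane in Fig 3 (b) uses the standard two-qubit concurrence, computable from Wootters' formula. A self-contained sketch (illustrative; not the `Concurrence_Measure` utility itself):

```python
import numpy as np

_SY = np.array([[0, -1j], [1j, 0]])   # Pauli-Y
_YY = np.kron(_SY, _SY)               # spin-flip operator sigma_y (x) sigma_y

def concurrence(rho):
    """Wootters concurrence for a two-qubit density matrix:
    C = max(0, l1 - l2 - l3 - l4), where l_i are the square roots of the
    eigenvalues of rho * rho_tilde in decreasing order."""
    rho_tilde = _YY @ rho.conj() @ _YY
    lam = np.sqrt(np.clip(np.real(np.linalg.eigvals(rho @ rho_tilde)), 0, None))
    lam = np.sort(lam)[::-1]
    return max(0.0, lam[0] - lam[1] - lam[2] - lam[3])

bell = np.zeros(4); bell[0] = bell[3] = 2 ** -0.5   # (|00> + |11>)/sqrt(2)
print(round(concurrence(np.outer(bell, bell)), 6))  # 1.0 (maximally entangled)
```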
Data-centric approach in the low-shot regime:
- Reconstructing the NISQ-sampled distribution with simulated measurements performed with shots ranging from 128 to 8192 - Fig 4
-
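In the low-shot regime, simulated measurements draw finite-shot counts from Born-rule probabilities. A hypothetical sketch for a computational-basis measurement (the `measurements` module's actual interface may differ):

```python
import numpy as np

def simulate_counts(rho, shots, rng=None):
    """Simulate a finite-shot computational-basis measurement:
    outcome probabilities are the diagonal of rho (Born rule),
    and a multinomial draw gives the empirical frequencies."""
    rng = rng or np.random.default_rng()
    probs = np.clip(np.real(np.diag(rho)), 0, None)
    probs = probs / probs.sum()                 # guard against rounding error
    counts = rng.multinomial(shots, probs)
    return counts / shots                       # empirical frequencies

rho = np.eye(4) / 4                             # maximally mixed two-qubit state
freqs = simulate_counts(rho, shots=128)         # noisy estimate of [.25]*4
```

At 128 shots the empirical frequencies fluctuate noticeably around the true probabilities; at 8192 they are much tighter, which is the trade-off Fig 4 explores.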
Heterogeneous state complexity:
-
Two qubits
- Reconstruction fidelities versus test state purity - Fig 5 (a) bottom
- Test MA distribution - Fig 5 (a) top
- Fidelity versus K parameter - Fig 5 (a) inset
- Zoomed in at the crossing point - Fig 5 (a) inset
-
Three qubits
- Reconstruction fidelities versus test state purity - Fig 5 (b) bottom
- Test MA distribution - Fig 5 (b) top
- Fidelity versus K parameter - Fig 5 (b) inset
- Zoomed in at the crossing point - Fig 5 (b) inset
-
Four qubits
- Reconstruction fidelities versus test state purity - Fig 5 (c) bottom
- Test MA distribution - Fig 5 (c) top
- Fidelity versus K parameter - Fig 5 (c) inset
- Zoomed in at the crossing point - Fig 5 (c) inset
-
Optimizing learning rate:
- Fidelity of reconstructed density matrices versus learning rate - Extended Data Fig 1
- The full purity distributions of the reconstructed states - Extended Data Fig 1 inset
-
Engineered states:
- Unfiltered - Extended Data Fig. 2 (a) left
- Engineered - Extended Data Fig. 2 (a) right
- Reconstruction fidelities versus the value of K - Extended Data Fig. 2 (b)
-
Reconstruction fidelity of NISQ-sampled test set versus the mean purity of various MA-distributed training states when K = 4.
- The mean purity of the training set matches the minimum and mean purity of the NISQ-sampled states when D = K = 4 - Extended Data Fig. 3
-
Reconstruction fidelity versus trainable parameters for various MA-distributed training sets:
- The pairs of concentration parameter and K-value are chosen as (α, K) ∈ {(0.01, 4), (0.1, 4), (0.3, 4), (0.8, 4), (0.3394, 6)} for the training sets - Extended Data Fig. 4