Feature: Bell inequality upper bound for qubits #563

Open
vprusso opened this issue Apr 21, 2024 · 0 comments
Labels: feature request, good first issue, help wanted

vprusso commented Apr 21, 2024

I have an implementation that is close to being complete for upper bounding the value of a qubit Bell inequality. The logic is heavily derived from the nice implementation here in QETLAB.

import numpy as np
import cvxpy as cp
from itertools import combinations

from toqito.perms import permutation_operator, swap
from toqito.channels import partial_transpose

def MN_matrix(m: int, a: int, x: int) -> np.ndarray:
    """
    Compute the measurement operator M_a^x (and, by symmetry, N_b^y).

    Args:
        m: The number of measurement settings for Alice and Bob.
        a: The measurement outcome (0 or 1).
        x: The measurement setting (1-indexed, as in the MATLAB code).

    Returns:
        The computed matrix.
    """
    # Build the permutation that swaps subsystems 1 and x + 1 (MATLAB-style, 1-indexed)
    perm = list(range(1, m+2))  # MATLAB indices start from 1
    perm[0] = x + 1
    perm[x] = 1

    # M_a^x = a*I + (-1)^a * P, where P permutes the qubit subsystems according to perm
    MN = a * np.eye(2**(m+1)) + ((-1)**a) * permutation_operator([2] * (m+1), perm, 0, 1)
    
    return MN

def bell_inequality_max(joint_coe, a_coe, b_coe, a_val, b_val):
    m, _ = joint_coe.shape
    oa = len(a_val)
    ob = len(b_val)

    # Ensure the input vectors are column vectors
    a_val = a_val.reshape(-1, 1)
    b_val = b_val.reshape(-1, 1)
    a_coe = a_coe.reshape(-1, 1)
    b_coe = b_coe.reshape(-1, 1)

    # Check if vectors a_val and b_val have only two elements
    if oa != 2 or ob != 2:
        raise ValueError("This script is only capable of handling Bell inequalities with two outcomes.")

    tot_dim = 2 ** (2 * m + 2)
    obj_mat = np.zeros((tot_dim, tot_dim), dtype=float)

    # Nested loops to compute the objective matrix
    for a in range(2):  # a = 0 to 1
        for b in range(2):  # b = 0 to 1
            for x in range(1, m+1):  # x = 1 to m (1-indexed in MATLAB, hence the range adjustment)
                for y in range(1, m+1):  # y = 1 to m
                    b_coeff = joint_coe[x-1, y-1] * a_val[a, 0] * b_val[b, 0]  # Adjust index for 0-based Python indexing
                    if y == 1:
                        b_coeff += a_coe[x-1, 0] * a_val[a, 0]  # Adjust for 0-based indexing
                    if x == 1:
                        b_coeff += b_coe[y-1, 0] * b_val[b, 0]  # Adjust for 0-based indexing
                    
                    # Adding the result of the tensor product to the objective matrix
                    obj_mat += b_coeff * np.kron(MN_matrix(m, a, x), MN_matrix(m, b, y))

    # Symmetrize the matrix to avoid numerical issues
    obj_mat = (obj_mat + obj_mat.T) / 2
    aux_mat = np.array([[1, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0]])

    W = cp.Variable((2**(2*m), 2**(2*m)), symmetric=True)

    M = swap(W, [2, m+1], [2] * (2*m))
    X = swap(cp.kron(M, aux_mat), [m+1, 2*m+1], [2] * (2*m+2))
    Z = swap(X, [m+2, 2*m+1], [2] * (2*m+2))

    objective = cp.Maximize(cp.trace(Z @ obj_mat))

    # Define the constraints
    constraints = [cp.trace(W) == 1, W >> 0]

    # Adding PPT constraints
    for sz in range(1, m):
        # Generate all size-sz combinations of indices from 1 to 2*m - 2
        for ppt_partition in combinations(range(1, 2 * m-1), sz):
            # Convert to 0-based indexing for Python
            ppt_partition = [x - 1 for x in ppt_partition]
            # Partial transpose on the partition, ensuring it's positive semidefinite
            pt_matrix = partial_transpose(W, ppt_partition, [4] + [2] * (2 * (m - 1)))
            constraints.append(pt_matrix >> 0)

    # Solve the problem
    prob = cp.Problem(objective, constraints)
    prob.solve(solver="MOSEK", verbose=True) 

    # Return the results
    rho = W.value
    bmax = prob.value

    return bmax
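
As a quick sanity check of the building blocks (a minimal sketch, assuming the code above has been run), the two outcome operators produced by MN_matrix for a fixed setting should sum to the identity, since the a = 1 case is just the identity minus the a = 0 case:

import numpy as np

# Sanity check: M_0^x + M_1^x == I, because
# MN_matrix(m, 1, x) = I - MN_matrix(m, 0, x) by construction.
m = 2
for x in range(1, m + 1):
    total = MN_matrix(m, 0, x) + MN_matrix(m, 1, x)
    assert np.allclose(total, np.eye(2 ** (m + 1)))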

Indeed, following the example from this page, it is possible to replicate the answer for the I3322 Bell inequality:

# Inputs:
joint_coe = np.array([
    [1, 1, -1], 
    [1, 1, 1],
    [-1, 1, 0]
])
a_coe = np.array([0, -1, 0])
b_coe = np.array([-1, -2, 0])
a_val = np.array([0, 1])
b_val = np.array([0, 1])

print(bell_inequality_max(joint_coe, a_coe, b_coe, a_val, b_val))
>>> 0.25000

Great. However, it seems to fail for other inequalities, even simple ones like the CHSH inequality:

# Inputs for CHSH
joint_coe = np.array([
    [1, 1],
    [1, -1]
])
a_coe = np.array([0, 0])  # No single-party terms for CHSH
b_coe = np.array([0, 0])

a_val = np.array([1, -1])
b_val = np.array([1, -1])

print(bell_inequality_max(joint_coe, a_coe, b_coe, a_val, b_val))
>>> 3.089617744060435

This is wrong (it should be $2 \sqrt{2} \approx 2.828$).

This is close, but diagnosing the issue is complicated without a MATLAB license, which would allow a direct comparison against the QETLAB output.

However, this could also be an issue with how I encode CHSH. If I flip the order of the entries in b_val to:

a_val = np.array([1, -1])
b_val = np.array([-1, 1])

Then I get ~2.84, which is still incorrect (but closer).
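
For reference, here is a small, self-contained check (independent of the SDP above) that the coefficient matrix [[1, 1], [1, -1]] with the ±1 outcome encoding should reach the Tsirelson bound of $2\sqrt{2}$, using the textbook optimal state and observables:

import numpy as np

# Maximally entangled state |phi+> with the standard optimal CHSH observables:
# A_1 = Z, A_2 = X, B_1 = (Z + X)/sqrt(2), B_2 = (Z - X)/sqrt(2).
Z = np.array([[1.0, 0.0], [0.0, -1.0]])
X = np.array([[0.0, 1.0], [1.0, 0.0]])
phi = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)

A = [Z, X]
B = [(Z + X) / np.sqrt(2), (Z - X) / np.sqrt(2)]

# CHSH operator built with the same joint coefficients [[1, 1], [1, -1]] as above.
chsh = (
    np.kron(A[0], B[0]) + np.kron(A[0], B[1])
    + np.kron(A[1], B[0]) - np.kron(A[1], B[1])
)
print(phi @ chsh @ phi)  # ~2.8284, i.e. 2 * sqrt(2)

Whatever the fix ends up being, it should recover this value (up to solver tolerance).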
