
minimization of expectation value #227

Open
ValentinKasper opened this issue Apr 23, 2024 · 3 comments

Comments

ValentinKasper commented Apr 23, 2024

The following minimal example

import quimb as qu
import quimb.tensor as qtn

L = 3
Z = qu.pauli('X')

bond_dim = 4
mps = qtn.MPS_rand_state(L, bond_dim, cyclic=True)

def normalize_state(psi):
    return psi / (psi.H @ psi) ** 0.5

def expectation_val(psi):
    return - (psi.H @ psi.gate(Z,1)) ** 2  

optmzr = qtn.TNOptimizer(
    mps,                                
    loss_fn=expectation_val,
    norm_fn=normalize_state,
    autodiff_backend='torch',      
    optimizer='L-BFGS-B',               
)

mps_opt = optmzr.optimize(100) 

leads to the error

TypeError: tensordot(): argument 'other' (position 2) must be Tensor, not numpy.ndarray

Unfortunately, I am unable to track it down. Could you please help? Thanks a lot!

jcmgray (Owner) commented Apr 23, 2024

The problem is just that Z is a numpy array whereas the tensors during the optimization are torch tensors, which are not compatible.
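The mismatch can be reproduced directly, outside of quimb (a minimal sketch, assuming torch and numpy are installed; the array shapes and values are illustrative):

```python
import numpy as np
import torch

a = torch.ones((2, 2), dtype=torch.float64)  # what the TN arrays become under the torch backend
b = np.eye(2)                                # a constant left behind as a plain numpy array

# Mixing the two backends raises the same TypeError as in the traceback:
try:
    torch.tensordot(a, b, dims=1)
except TypeError:
    print("TypeError: tensordot needs both arguments to be torch tensors")

# Converting the numpy array first makes the contraction work:
c = torch.tensordot(a, torch.as_tensor(b), dims=1)
```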

The 'proper' way to handle this is to make Z an argument of your loss function rather than capturing it as a closure, and then supply loss_constants={"Z": Z}. This lets quimb know which objects need to be converted to whichever backend is in use.

You could also just convert Z to a torch tensor yourself, if you are always going to use the torch backend!
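For that second route, a minimal sketch of converting the operator up front (hedged: qu.pauli('Z') is written out here as its plain numpy matrix; in practice you would pass the quimb array through np.asarray first, since it is an ndarray subclass):

```python
import numpy as np
import torch

# qu.pauli('Z') defaults to a complex128 numpy array; written out explicitly here:
Z_np = np.array([[1, 0], [0, -1]], dtype=np.complex128)

# Convert once, up front, so gate() contracts torch tensors with torch tensors:
Z_torch = torch.as_tensor(np.asarray(Z_np))
```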

ValentinKasper (Author) commented

Thank you so much for your fast reply, if I understand you correctly you suggest:

import quimb as qu
import quimb.tensor as qtn

L = 3
Z = qu.pauli('Z')

bond_dim = 4
mps = qtn.MPS_rand_state(L, bond_dim, cyclic=True)

def normalize_state(psi):
    return psi / (psi.H @ psi) ** 0.5

def expectation_val(psi, Z):
    return - (psi.H @ psi.gate(Z,1)) ** 2  

optmzr = qtn.TNOptimizer(
    mps,                                
    loss_fn=expectation_val,
    norm_fn=normalize_state,
    loss_constants={"Z": Z},
    autodiff_backend='torch',      
    optimizer='L-BFGS-B',               
)

mps_opt = optmzr.optimize(100) 

This leads to the error

RuntimeError: both inputs should have same dtype

I understand that qu.pauli('Z') is a numpy array. I will try out the pure pytorch solution you suggest as well.

Let me know what you think.

jcmgray (Owner) commented Apr 30, 2024

Hi @ValentinKasper, yes, for torch you just need to make the arrays in all the tensors the same dtype (or explicitly cast them when necessary). If your hamiltonian is real, you can simply supply e.g. qu.pauli('z', dtype="float64"). If it is complex, you would instead change the dtype of the TN; note that the loss should always be real, so you might have to take the real part explicitly.
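A sketch of that dtype alignment with plain numpy (illustrative; with quimb this is just the qu.pauli('z', dtype="float64") call above):

```python
import numpy as np

# pauli('Z') defaults to complex128:
Z_complex = np.array([[1, 0], [0, -1]], dtype=np.complex128)

# Pauli-Z is real-valued, so it can safely be stored as float64
# to match a real float64 MPS:
Z_real = Z_complex.real.astype(np.float64)
```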
