
Computing entropy of tensor networks containing classical probability distributions #195

Closed · Answered by jcmgray
apiedrafTNO asked this question in Q&A

So for classical tensor networks I don't know of any really scalable methods, but you can push a bit further by contracting the output chunks lazily and computing $-\sum_i p_i \log p_i$ on each of them separately. I added a cotengra example here https://cotengra.readthedocs.io/en/latest/examples/ex_large_output_lazy.html that shows this for a size $2^{36}$ marginal.
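To illustrate the idea (not the cotengra API itself), here is a minimal sketch of the chunk-wise accumulation: the entropy is a sum over outcomes, so it can be accumulated one lazily-produced chunk at a time without ever materialising the full distribution. The function name `entropy_from_chunks` and the toy chunks are hypothetical; in practice each chunk would come from contracting one slice of the network's output.

```python
import math

def entropy_from_chunks(chunks):
    """Accumulate H = -sum_i p_i log p_i over chunks of a probability
    distribution, so only one chunk is in memory at a time."""
    h = 0.0
    for chunk in chunks:  # e.g. lazily contracted output slices
        for p in chunk:
            if p > 0.0:  # by convention, 0 log 0 contributes 0
                h -= p * math.log(p)
    return h

# Toy example: a uniform distribution over 8 outcomes, split into 2 chunks.
chunks = [[0.125] * 4, [0.125] * 4]
print(entropy_from_chunks(chunks))  # → log(8) ≈ 2.0794
```

Because each chunk's contribution is an independent partial sum, the chunks can also be processed in parallel or streamed from disk.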

Beyond that, sampling or a replica trick might be possible, but nothing non-trivial comes to mind!
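The sampling route rests on the identity $H = -\mathbb{E}_{x \sim p}[\log p(x)]$: if you can draw samples from $p$ and evaluate $p(x)$ for a given outcome (both of which a tensor network contraction can provide for a single configuration), you can average $-\log p(x)$ over samples instead of summing over all outcomes. A hedged sketch with a hypothetical toy distribution, standing in for per-configuration network contractions:

```python
import math
import random

def sampled_entropy(p, n_samples=100_000, seed=0):
    """Monte Carlo estimate of H = -E_{x~p}[log p(x)], assuming we can
    sample from p and evaluate p(x) pointwise."""
    rng = random.Random(seed)
    outcomes = list(p)
    weights = [p[x] for x in outcomes]
    total = 0.0
    for x in rng.choices(outcomes, weights=weights, k=n_samples):
        total += -math.log(p[x])  # each sample contributes -log p(x)
    return total / n_samples

# Toy distribution over 4 bitstrings; exact H = 1.75 bits ≈ 1.2130 nats.
p = {"00": 0.5, "01": 0.25, "10": 0.125, "11": 0.125}
est = sampled_entropy(p)
```

The estimator's error shrinks as $1/\sqrt{n}$, independent of the number of outcomes, which is what makes it attractive when the marginal is too large to enumerate.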

Answer selected by apiedrafTNO