
raises error in continuous entropy #11

Open
un-lock-me opened this issue Jun 30, 2019 · 3 comments

Comments

@un-lock-me

Hi,

Thanks for sharing your work. I want to use the continuous entropy estimator from your project in mine.

I have a matrix like this:

    x = tf.Variable([[ 0.96,  -0.65,   0.99,  -0.1   ],
                     [ 0.97,   0.33,   0.25,   0.05  ],
                     [ 0.9,    0.001,  0.009,  0.33  ],
                     [-0.60,  -0.1,   -0.3,   -0.5   ],
                     [ 0.49,  -0.8,   -0.05,  -0.0036],
                     [ 0.0,   -0.45,   0.087,  0.023 ],
                     [ 0.3,   -0.23,   0.82,  -0.28  ]])

When I apply `ee.entropy`, I receive this error:

    rev = 1/ee.entropy(row)
  File "/home/sgnbx/Downloads/NPEET/npeet/entropy_estimators.py", line 21, in entropy
    assert k <= len(x) - 1, "Set k smaller than num. samples - 1"
TypeError: object of type 'Tensor' has no len() 

This is my code:

    def rev_entropy(x):
        def row_entropy(row):
            rev = 1 / ee.entropy(row)
            return rev
        rev = tf.map_fn(row_entropy, x, dtype=tf.float32)
        return rev

    x = tf.Variable([[ 0.96,  -0.65,   0.99,  -0.1   ],
                     [ 0.97,   0.33,   0.25,   0.05  ],
                     [ 0.9,    0.001,  0.009,  0.33  ],
                     [-0.60,  -0.1,   -0.3,   -0.5   ],
                     [ 0.49,  -0.8,   -0.05,  -0.0036],
                     [ 0.0,   -0.45,   0.087,  0.023 ],
                     [ 0.3,   -0.23,   0.82,  -0.28  ]])

    p = (x + tf.abs(x)) / 2
    ent_p = rev_entropy(p)
    print(ent_p)

Can you please explain how I can choose `k` here?
@gregversteeg
Owner

Sorry to take so long responding. `k` is one of the arguments of `ee.entropy`:

    def entropy(x, k=3, base=2):

The default is 3, and you clearly have more than 3 samples, so `k` itself is not the problem. My guess is that the issue comes from passing a TensorFlow tensor. I don't think that will work, for a variety of reasons; the most fundamental one is that the k-nearest-neighbor library I use with numpy probably won't work on tensors.

I'd be interested to hear whether it works if you try x = np.array( same matrix ).
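In case it helps to see the estimator outside of TensorFlow, below is a minimal numpy-only sketch of the Kozachenko–Leonenko kNN entropy estimate for a 1-D sample, written from the standard formula that `ee.entropy` implements. The pure-Python `digamma` and both function names are my own, so treat this as an approximation of the library's behavior rather than its exact code:

```python
import numpy as np

def digamma(x):
    # Pure-Python digamma: recurrence up to x >= 6, then the
    # asymptotic series (accurate to well under 1e-6 here).
    result = 0.0
    while x < 6:
        result -= 1.0 / x
        x += 1
    inv = 1.0 / x
    inv2 = inv * inv
    result += np.log(x) - 0.5 * inv \
        - inv2 * (1.0 / 12 - inv2 * (1.0 / 120 - inv2 / 252))
    return result

def kl_entropy_1d(samples, k=3, base=2):
    # Kozachenko-Leonenko estimate for n scalar samples:
    #   H ~ psi(n) - psi(k) + log 2 + mean_i log(eps_i),
    # where eps_i is the distance from sample i to its k-th neighbor.
    x = np.asarray(samples, dtype=float)
    n = len(x)
    assert k <= n - 1, "Set k smaller than num. samples - 1"
    dist = np.abs(x[:, None] - x[None, :])   # pairwise distances
    eps = np.sort(dist, axis=1)[:, k]        # column 0 is the self-distance
    const = digamma(n) - digamma(k) + np.log(2)
    return (const + np.log(eps).mean()) / np.log(base)
```

On many samples drawn uniformly from [0, 1] this lands near the true differential entropy of 0 bits; note that a 4-element row only barely satisfies the default k=3, since the assertion requires k <= n - 1.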

@un-lock-me
Author

Thank you so much for getting back to me on this issue.

Actually, I will be able to convert the code to TensorFlow so that it can work on tensors.

The only issue I have is that I need to calculate the entropy over each row, i.e., I need to know the entropy of a single row of my tensor.

Taking this into account, most of the kNN implementations of continuous entropy that I have looked at consider the whole matrix.

Now I am stuck, as I don't know how to change the code to apply kNN to one row at a time. I did write code for it, but it raises an error because it compares each row with the other rows (please correct me if I'm wrong).

I would appreciate it if you could share your thoughts on this.

Thanks~
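For the row-by-row case, one option that avoids comparing rows against each other is a spacing-based (Vasicek-style) estimate that uses only the sorted values inside a single row. This is a sketch under my own naming (`vasicek_entropy` and `row_entropies` are hypothetical helpers, not NPEET functions); note that tied values, such as the zeros produced by p = (x + |x|)/2, yield zero spacings and hence -inf unless tiny noise is added first, which is why NPEET jitters its inputs:

```python
import numpy as np

def vasicek_entropy(row, m=1, base=2):
    # Spacing-based entropy estimate for a single 1-D sample:
    #   H ~ mean_i log( n * (x_(i+m) - x_(i-m)) / (2m) ),
    # clipping the indices at the edges (a small boundary bias).
    x = np.sort(np.asarray(row, dtype=float))
    n = len(x)
    assert 1 <= m < n, "Need at least m + 1 samples"
    idx = np.arange(n)
    spacings = x[np.clip(idx + m, 0, n - 1)] - x[np.clip(idx - m, 0, n - 1)]
    return np.log(n * spacings / (2 * m)).mean() / np.log(base)

def row_entropies(matrix):
    # Apply the estimator independently to each row; no row ever
    # sees the other rows, unlike a whole-matrix kNN query.
    return np.array([vasicek_entropy(row) for row in np.asarray(matrix)])
```

Because every step is a sort plus arithmetic, the same computation can also be expressed with tf.sort inside tf.map_fn if gradients are needed.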

@gregversteeg
Owner

I'm not sure if it's relevant, but I have seen entropy estimators based on pairwise distances that can be implemented as differentiable expressions in tensorflow. For instance, Eq. 10 of this paper: https://arxiv.org/pdf/1705.02436.pdf. I think Artemy has a few papers discussing that "mixture of Gaussians" estimator for entropy and mutual information.
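As a rough illustration of that pairwise-distance idea (not the paper's exact Eq. 10; `gaussian_mixture_entropy` and the fixed bandwidth `sigma` are illustrative choices of mine), the sketch below treats the N samples as an equal-weight Gaussian mixture and averages the mixture's log-density over the samples. Every step is a dense array operation, so the same expression can be written in TensorFlow and differentiated end to end:

```python
import numpy as np

def gaussian_mixture_entropy(x, sigma=0.5, base=2):
    # Kernel "resubstitution" entropy estimate: place a Gaussian of
    # width sigma on every sample, evaluate the resulting mixture
    # density at each sample, and return minus the mean log-density.
    x = np.asarray(x, dtype=float)
    n, d = x.shape
    sq = ((x[:, None, :] - x[None, :, :]) ** 2).sum(axis=-1)  # ||xi - xj||^2
    log_kernel = -sq / (2 * sigma ** 2) \
        - 0.5 * d * np.log(2 * np.pi * sigma ** 2)
    log_density = np.log(np.exp(log_kernel).mean(axis=1))  # log mixture pdf
    return -log_density.mean() / np.log(base)
```

Spreading the samples out lowers the density at each point, so the estimate grows, matching the behavior of differential entropy under scaling.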
