
All nan in matrix come from non negative tucker decomposition #516

Open
hahia opened this issue Jul 30, 2023 · 2 comments

Comments

@hahia

hahia commented Jul 30, 2023

Code:

```python
import tensorly as tl
from tensorly.decomposition import non_negative_tucker

tl.set_backend('pytorch')
tensor = tl.tensor(dat, device='cuda:3')
facs_overall = non_negative_tucker(tensor, rank=[8, 8, 8], random_state=2337)
```

dat is:

[screenshots of the input array `dat`]

The facs_overall:

[screenshots of the returned factors, which are all NaN]

Could someone provide an explanation for this?

@JeanKossaifi
Member

What's your input tensor? Can you provide a minimal snippet of code that reproduces your error? It seems your tensor is very sparse; could it also be low-rank?

It also seems the range of values might be problematic. I think that's also linked to something else I want to add in TensorLy: optional normalization of the input tensor (with the un-normalization applied directly in factorized form) @cohenjer @aarmey

As a small test, could you try i) a lower rank and ii) normalizing your tensor (e.g. remove the mean and divide by the std)?

@cohenjer
Contributor

cohenjer commented Aug 2, 2023

Just to complete @JeanKossaifi's great answer: you can easily normalize a tensor T using T = T/tl.norm(T), but centering would make some values in the tensor negative, so I would not do it in the context of a nonnegative decomposition.

Another problem can occur if whole slices of the tensor are zero. non_negative_tucker is based on multiplicative updates, so you may be dividing by zero at some point in the algorithm. There is a safeguard (a small epsilon of 1e-12 is added), but maybe at your precision this epsilon is too small and gets treated as 0?
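A hypothetical helper to detect all-zero slices before running the decomposition (pure NumPy; the function name and data are illustrative):

```python
# For a nonnegative array, a slice sums to zero iff the whole slice
# is zero, so per-mode sums are enough to find problem slices.
import numpy as np

def find_zero_slices(dat):
    """Return, per mode, the indices of slices that are entirely zero."""
    zero_slices = {}
    for mode in range(dat.ndim):
        # sum over every axis except `mode`
        axes = tuple(ax for ax in range(dat.ndim) if ax != mode)
        sums = dat.sum(axis=axes)
        zero_slices[mode] = np.where(sums == 0)[0].tolist()
    return zero_slices

dat = np.random.rand(5, 5, 5)
dat[:, 2, :] = 0.0                  # make one mode-1 slice all zero
print(find_zero_slices(dat))        # → {0: [], 1: [2], 2: []}
```

Any mode listing nonempty indices is a candidate source of the division-by-zero behavior described above.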
