
Unfolding sometimes results into concatenated channels #627

Open
markusMM opened this issue Jun 30, 2022 · 3 comments

Comments

@markusMM

markusMM commented Jun 30, 2022

```python
unfolded = torch.nn.functional.unfold(...)
```

*(screenshot of the output)* We can see how torch 1.10.2 concatenates the windows of all channels after unfold.
The expected behavior, in the code, would be to handle `(batch, chans, win_size)` per chunk, i.e. an output of shape `(batch, chans, win_size, n_chunks)`.

Thus, from my perspective, it has to be reshaped before handing it to the NN:

```python
unfolded = unfolded.reshape(batch, channels, self.window_size, -1)
```
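For reference, a minimal sketch (with hypothetical sizes, treating the 1-D signal as a 1-pixel-high image since `F.unfold` expects 4-D input) reproducing the flattened output and the suggested reshape:

```python
import torch
import torch.nn.functional as F

batch, chans, win, hop, T = 2, 3, 4, 4, 16  # hypothetical sizes

# F.unfold expects a 4D (N, C, H, W) input; add a dummy height dimension.
x = torch.arange(batch * chans * T, dtype=torch.float32).reshape(batch, chans, 1, T)

unfolded = F.unfold(x, kernel_size=(1, win), stride=(1, hop))
# The channel and window dimensions come back flattened together:
print(unfolded.shape)  # torch.Size([2, 12, 4])  -> (batch, chans * win, n_chunks)

# Separate them again, as suggested above:
restored = unfolded.reshape(batch, chans, win, -1)
print(restored.shape)  # torch.Size([2, 3, 4, 4])  -> (batch, chans, win, n_chunks)
```

The reshape is valid because `F.unfold` orders its flattened dimension channel-first, so the first `win` rows belong to channel 0, the next `win` to channel 1, and so on.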

@mpariente
Collaborator

Thanks for the issue, that's very informative!
Did you search PyTorch's changelog to see whether they note this change? Do you think it's intended behavior, or a bug? Has it been fixed in newer versions?

@markusMM
Author

markusMM commented Jul 5, 2022

So, right now there are two simple ways of unfolding a tensor.
`nn.Unfold` (and its functional wrapper) has always shown the behaviour above on recent versions (since v0.4.1).
And the built-in `torch.Tensor.unfold` always unfolds a single specified dimension and outputs shape `(..., n_windows, win_size)`.
This seems to be the better solution, since it avoids the reshape (note the signature is `unfold(dimension, size, step)`):

```python
unfolded = frame.unfold(-1, window_size, stride)
```
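A minimal sketch (hypothetical sizes) showing the `Tensor.unfold` layout, and that it matches the `F.unfold`-plus-reshape result up to a transpose of the last two dimensions:

```python
import torch
import torch.nn.functional as F

batch, chans, win, hop, T = 2, 3, 4, 2, 16  # hypothetical sizes
frame = torch.arange(batch * chans * T, dtype=torch.float32).reshape(batch, chans, T)

# Tensor.unfold(dimension, size, step) slides over one dimension only:
windows = frame.unfold(-1, win, hop)
print(windows.shape)  # torch.Size([2, 3, 7, 4])  -> (batch, chans, n_windows, win)

# Same content as F.unfold + reshape, with the last two dims swapped:
via_functional = F.unfold(
    frame.unsqueeze(2), kernel_size=(1, win), stride=(1, hop)
).reshape(batch, chans, win, -1)
assert torch.equal(windows, via_functional.transpose(-2, -1))
```

Note the window dimension comes last here, whereas the reshape above yields `(batch, chans, win_size, n_chunks)`; a single `transpose(-2, -1)` converts between the two layouts.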

cheers

@mpariente
Collaborator

Thanks for the explanation @markusMM

Could you submit a PR to fix the problem, please? 🙃 Thanks in advance!
