
overhead when applied to semantic segmentation #7

Open
baibaidj opened this issue Jun 2, 2021 · 1 comment


baibaidj commented Jun 2, 2021

Hi. It's exciting to see this great work.
I was exploring the possibility of applying MCR to the semantic segmentation task.
One way is to treat every pixel as a sample and fold all pixels into the batch dimension when computing the two losses, discriminative and compressive.
However, the coding rate operation scales as O(n^2) in memory, where n is the number of samples in a mini-batch, and the pixel count of a single image ranges from thousands (2D) to millions (3D volumes), so the operation may exceed the memory of a typical commercial GPU.
I was wondering whether you have studied this direction, and what suggestions you might have.
Many thanks.
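For context, the memory blow-up described above can be avoided in one common case. The coding rate has the form R(Z) = 1/2 · logdet(I + d/(n·ε²) · ZᵀZ) for n samples with d-dimensional features, and by Sylvester's determinant identity logdet(Iₙ + a·ZᵀZ) = logdet(I_d + a·ZZᵀ), so when d ≪ n (a few hundred channels vs. millions of pixels) one can work with a d×d matrix instead of n×n. The sketch below is hypothetical and not the repo's API; the function name `coding_rate` and the ε default are illustrative assumptions:

```python
# Hypothetical sketch (not this repo's API): coding rate on per-pixel features.
# By Sylvester's determinant identity,
#   logdet(I_n + a * Z.T @ Z) = logdet(I_d + a * Z @ Z.T),
# so for n pixels with d-dim features we can build a d x d matrix
# instead of n x n, reducing memory from O(n^2) to O(d^2).
import numpy as np

def coding_rate(Z: np.ndarray, eps: float = 0.5) -> float:
    """Z: (d, n) matrix holding n pixel features of dimension d."""
    d, n = Z.shape
    alpha = d / (n * eps ** 2)
    # d x d Gram matrix -- cheap even when n (number of pixels) is huge
    gram = np.eye(d) + alpha * (Z @ Z.T)
    _sign, logdet = np.linalg.slogdet(gram)
    return 0.5 * logdet

rng = np.random.default_rng(0)
feat = rng.standard_normal((16, 64, 64))  # (d, H, W) feature map
Z = feat.reshape(16, -1)                  # flatten pixels: (d, n) with n = 4096
rate = coding_rate(Z)

# Sanity check on a small case: the n x n form matches the d x d form.
d, n = 8, 32
Zs = rng.standard_normal((d, n))
a = d / (n * 0.5 ** 2)
r_nxn = 0.5 * np.linalg.slogdet(np.eye(n) + a * Zs.T @ Zs)[1]
assert np.isclose(coding_rate(Zs), r_nxn)
```

This only removes the dependence on pixel count for the Gram matrix; the class-conditional (compressive) terms and the O(nd²) matmul cost remain, so pixel subsampling may still be needed for 3D volumes.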

@ryanchankh (Owner) commented Jun 16, 2021 via email
