Is it possible to compute gradients of the attributions with respect to the sample? #1221

Open · cys515 opened this issue Dec 11, 2023 · 0 comments

cys515 commented Dec 11, 2023

Hello,

My code is as follows:

import torch
from captum.attr import Saliency

...  # load test sample, label, and model
sample = sample.requires_grad_(True)  # a Tensor of size [1, 50, 50]
saliency = Saliency(model)
attributions = saliency.attribute(sample, target=label)
attributions = torch.abs(attributions)
min_attr = torch.min(attributions)
max_attr = torch.max(attributions)
attributions = (attributions - min_attr) / (max_attr - min_attr)
y_mesh, x_mesh = torch.meshgrid(torch.arange(50, dtype=torch.float64),
                                torch.arange(50, dtype=torch.float64),
                                indexing="ij")
mass_center = torch.stack([torch.sum(attributions * x_mesh) / (50 * 50),
                           torch.sum(attributions * y_mesh) / (50 * 50)])
mass_center_loss = -torch.sum((mass_center - mass_center0) ** 2)  # mass_center0 is a constant
temp = -torch.autograd.grad(mass_center_loss, sample)[0]

I would like to calculate the gradient of mass_center_loss with respect to the input sample,
but this raises an error: "One of the differentiated Tensors appears to not have been used in the graph."
I guess the reason is that the sample does not stay in the autograd graph inside the Captum function, presumably because Captum computes the attribution gradients without create_graph=True and returns them detached.
So what can I do to preserve the gradient information of the sample through the attribution computation?
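
For comparison, the second gradient call does work when the saliency map is computed manually with torch.autograd.grad(..., create_graph=True), which keeps the first gradient computation itself in the autograd graph. Below is a minimal self-contained sketch of that workaround; the model, sample, label, and mass_center0 here are placeholder stand-ins, not my real ones:

import torch
import torch.nn as nn

# Placeholder stand-ins for the real model, sample, and label (assumptions).
model = nn.Sequential(nn.Flatten(), nn.Linear(50 * 50, 10))
sample = torch.randn(1, 50, 50, requires_grad=True)
label = 3
mass_center0 = torch.tensor([25.0, 25.0], dtype=torch.float64)

score = model(sample)[0, label]
# create_graph=True keeps this gradient computation in the autograd graph,
# so a second backward pass through the attributions is possible.
raw_saliency = torch.autograd.grad(score, sample, create_graph=True)[0]

attributions = torch.abs(raw_saliency)
attributions = (attributions - attributions.min()) / (attributions.max() - attributions.min())
y_mesh, x_mesh = torch.meshgrid(torch.arange(50, dtype=torch.float64),
                                torch.arange(50, dtype=torch.float64),
                                indexing="ij")
mass_center = torch.stack([torch.sum(attributions * x_mesh) / (50 * 50),
                           torch.sum(attributions * y_mesh) / (50 * 50)])
mass_center_loss = -torch.sum((mass_center - mass_center0) ** 2)
temp = -torch.autograd.grad(mass_center_loss, sample)[0]  # no error here

The only essential difference from the Captum version is the create_graph=True flag on the first gradient call.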
