Hello,
My code is as follows:

```python
import torch
from captum.attr import Saliency

# ... load test sample, label, and model
sample = sample.requires_grad_(True)  # a Tensor of size [1, 50, 50]
saliency = Saliency(model)
attributions = saliency.attribute(sample, target=label)
attributions = torch.abs(attributions)
min_attr = torch.min(attributions)
max_attr = torch.max(attributions)
attributions = (attributions - min_attr) / (max_attr - min_attr)
y_mesh, x_mesh = torch.meshgrid(
    torch.arange(50, dtype=torch.float64),
    torch.arange(50, dtype=torch.float64),
)
mass_center = torch.stack([
    torch.sum(attributions * x_mesh) / (50 * 50),
    torch.sum(attributions * y_mesh) / (50 * 50),
])
mass_center_loss = -torch.sum((mass_center - mass_center0) ** 2)  # mass_center0 is a constant
temp = -torch.autograd.grad(mass_center_loss, sample)[0]
```
I would like to calculate the gradient of mass_center_loss with respect to the input sample, but it raises an error: "One of the differentiated Tensors appears to not have been used in the graph." I guess this is because the sample's gradient graph is not preserved inside the Captum function. What can I do to preserve the gradient information of the sample during the calculation of the attribution?
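For reference, one workaround is to bypass `Saliency.attribute` and compute the saliency gradient manually with `create_graph=True`, so the attributions stay connected to the input's graph and a loss on them can itself be differentiated with respect to the input. A minimal sketch with a toy nonlinear model (the model architecture, `label`, and `mass_center0` values here are assumptions for illustration, not from the original code):

```python
import torch

torch.manual_seed(0)

# Toy stand-in for the real model; it must be nonlinear in the input,
# otherwise the saliency gradient is constant and does not depend on sample.
model = torch.nn.Sequential(
    torch.nn.Flatten(start_dim=0),
    torch.nn.Linear(50 * 50, 32),
    torch.nn.Tanh(),
    torch.nn.Linear(32, 10),
)

sample = torch.randn(1, 50, 50, requires_grad=True)
label = 3  # hypothetical target class

score = model(sample)[label]
# create_graph=True is the key: the gradient itself becomes part of the
# autograd graph, so it can be differentiated again later.
grads, = torch.autograd.grad(score, sample, create_graph=True)
attributions = grads.abs()
attributions = (attributions - attributions.min()) / (
    attributions.max() - attributions.min()
)

y_mesh, x_mesh = torch.meshgrid(
    torch.arange(50, dtype=torch.float32),
    torch.arange(50, dtype=torch.float32),
    indexing="ij",
)
mass_center = torch.stack([
    torch.sum(attributions * x_mesh) / (50 * 50),
    torch.sum(attributions * y_mesh) / (50 * 50),
])
mass_center0 = torch.tensor([25.0, 25.0])  # hypothetical constant
mass_center_loss = -torch.sum((mass_center - mass_center0) ** 2)

# This now succeeds, because sample is in the graph of mass_center_loss.
input_grad, = torch.autograd.grad(mass_center_loss, sample)
print(input_grad.shape)  # torch.Size([1, 50, 50])
```

Note that this re-implements only the plain-gradient saliency; whether Captum's own `Saliency.attribute` can be made to keep the graph attached to the input may depend on the Captum version.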