I am currently attempting to use LRP to propagate relevance through a ResNet50 and I noticed that the implementation of the $LRP_{\alpha,\beta}$ rule for $\alpha=1$ and $\beta=0$ seems incorrect.
Indeed, if I look at the formula given in https://iphome.hhi.de/samek/pdf/MonXAI19.pdf, the rule is $$R_j=\sum_k \dfrac{(a_jw_{jk})^+}{\sum_j (a_jw_{jk})^+}R_k$$
However, by only removing the negative weights in the implementation of the class Alpha1_Beta0_Rule, the resulting rule in Captum seems to be $$R_j=\sum_k \dfrac{a_j(w_{jk})^+}{\sum_j a_j(w_{jk})^+}R_k$$
In practice, and similar to the implementation proposed here, I believe that both positive AND negative weights should be split in order to compute the value $(a_jw_{jk})^+$ as $$(a_jw_{jk})^+ = a_j^+w_{jk}^+ + a_j^-w_{jk}^-$$
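To make the difference concrete, here is a small numerical sketch (with hypothetical toy activations and weights, not taken from Captum's code) comparing the three quantities: the exact $(a_jw_{jk})^+$, the split form $a_j^+w_{jk}^+ + a_j^-w_{jk}^-$, and the weights-only clamp $a_j(w_{jk})^+$. The first two always agree, while the third diverges as soon as some activation $a_j$ is negative:

```python
import torch

# Toy activations a_j (some negative, e.g. pre-ReLU or batch-norm outputs
# inside a ResNet50) and weights w_{jk}. Values are illustrative only.
a = torch.tensor([1.0, -2.0, 3.0])          # activations a_j
w = torch.tensor([[ 0.5, -1.0],
                  [-0.5,  2.0],
                  [ 1.0, -0.5]])            # weights w_{jk}

# Reference: (a_j * w_{jk})^+ computed directly on the products.
z_plus_exact = (a.unsqueeze(1) * w).clamp(min=0)

# Split form: a_j^+ w_{jk}^+ + a_j^- w_{jk}^-
# (a product is positive exactly when both factors share a sign).
a_pos, a_neg = a.clamp(min=0), a.clamp(max=0)
w_pos, w_neg = w.clamp(min=0), w.clamp(max=0)
z_plus_split = a_pos.unsqueeze(1) * w_pos + a_neg.unsqueeze(1) * w_neg

# Clamping only the weights gives a_j (w_{jk})^+, which differs
# whenever a_j < 0 (here, row j=1 contributes -2 * 2 = -4).
z_weights_only = a.unsqueeze(1) * w_pos

print(torch.allclose(z_plus_exact, z_plus_split))    # True
print(torch.allclose(z_plus_exact, z_weights_only))  # False
```

With strictly non-negative activations (e.g. directly after a ReLU) the two rules coincide, which may be why the discrepancy only shows up in architectures like ResNet50 where some layers receive signed inputs.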
However, I'm not sure that the current Captum class PropagationRule really allows this sort of shenanigans, as it relies mostly on forward and backward hooks.
Anyway, I would love to have the opinion of the developers on this subject. Have a nice day :)