🚀 The feature, motivation and pitch

Explainability is a key feature of GNNs, which is already implemented in PyG. However, of all the features introduced, only a few have been adapted to heterogeneous graphs.
Algorithms: Of all the algorithms implemented for explainability, only the Captum algorithm is compatible with heterogeneous graphs. It would be interesting to adapt other graph-specific algorithms such as `GNNExplainer` or `PGExplainer`, as well as more general ones such as `AttentionExplainer`.
Moreover, the algorithms adopted by PyG could be extended with new algorithms that have been published over the years (for example, see this survey), although that is not specific to heterogeneous graphs. Maybe in the future, new algorithms could be implemented for homogeneous and heterogeneous graphs simultaneously, so that this gap does not reappear.
- Adaptation of `GNNExplainer` for heterogeneous graphs.
- Adaptation of `PGExplainer` for heterogeneous graphs.
- Adaptation of `AttentionExplainer` for heterogeneous graphs.
- Implementation of new explainability algorithms, with a system that allows adaptation to heterogeneous graphs without additional work.
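As a rough illustration of what an `AttentionExplainer`-style adaptation might do, the sketch below (plain Python; function name and input shapes are hypothetical, not PyG API) averages attention coefficients across layers, per edge type, to form per-type edge masks:

```python
def attention_edge_masks(att_dict_per_layer):
    """Hypothetical hetero AttentionExplainer core: average attention
    coefficients across layers, separately for each edge type.

    att_dict_per_layer: list of {edge_type: [attention per edge]},
    one dict per GNN layer (illustrative input, not PyG's actual format).
    """
    per_type = {}
    for layer in att_dict_per_layer:
        for edge_type, att in layer.items():
            per_type.setdefault(edge_type, []).append(att)
    # Element-wise mean over layers for each edge type.
    return {
        edge_type: [sum(vals) / len(vals) for vals in zip(*layers)]
        for edge_type, layers in per_type.items()
    }
```

The real adaptation would also have to handle layers that attend over different edge types (e.g. `HANConv` vs. `GATConv`), which this sketch glosses over.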
Features: Some features available in the explanations of homogeneous GNNs are missing for heterogeneous GNNs. For example, the `visualize_graph` method of `Explanation` is not available for `HeteroExplanation`. Right now this can be done with the `get_explanation_subgraph` method and generating the plot by hand with NetworkX, but it would be nice to implement it so that it happens automatically.
- Implementation of missing features for `HeteroExplanation`, such as `visualize_graph`.
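Until something like `visualize_graph` is ported, the manual NetworkX workaround can be sketched roughly like this (the input dicts stand in for the per-type edge indices and masks of an explanation subgraph; names are illustrative, not PyG API):

```python
import networkx as nx

def hetero_explanation_to_nx(edge_index_dict, edge_mask_dict, threshold=0.5):
    """Build a NetworkX graph from a heterogeneous explanation by hand,
    keeping only edges whose mask value exceeds the threshold.

    edge_index_dict: {(src_type, rel, dst_type): ([src_ids], [dst_ids])}
    edge_mask_dict:  {(src_type, rel, dst_type): [mask value per edge]}
    (Illustrative formats; real HeteroExplanation objects store tensors.)
    """
    g = nx.MultiDiGraph()
    for edge_type, (srcs, dsts) in edge_index_dict.items():
        src_type, rel, dst_type = edge_type
        for s, d, w in zip(srcs, dsts, edge_mask_dict[edge_type]):
            if w >= threshold:
                # Prefix node ids with their type so ids don't collide.
                g.add_edge(f"{src_type}_{s}", f"{dst_type}_{d}",
                           relation=rel, weight=w)
    return g
```

The resulting graph can then be drawn with `nx.draw`, e.g. using the `weight` attribute for edge widths, which is essentially what an automatic `visualize_graph` for `HeteroExplanation` would do.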
Metrics: Currently, the available metrics such as fidelity or faithfulness are only available for homogeneous graphs, but they could be adapted for heterogeneous graphs. To continue the work of #5628, we could think about implementing new metrics for all kinds of graphs, such as sparsity or stability (e.g., see https://arxiv.org/pdf/2012.15445.pdf).
- Adaptation of fidelity, faithfulness, and the characterization score for heterogeneous graphs.
- Sparsity metric for all graphs.
- Stability metric for all graphs.
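For reference, a sparsity metric in the spirit of the linked survey can be as simple as the fraction of nodes excluded from the explanation. A minimal sketch (plain Python, with a hypothetical dict-of-masks input so it covers the heterogeneous case too):

```python
def hetero_sparsity(node_mask_dict, threshold=0.5):
    """Sparsity of an explanation: the fraction of nodes NOT selected
    (higher = sparser explanation).

    node_mask_dict: {node_type: [soft mask value per node]} -- an
    illustrative stand-in for per-type node masks; a homogeneous graph
    is just the single-type case.
    """
    values = [v for mask in node_mask_dict.values() for v in mask]
    kept = sum(1 for v in values if v >= threshold)
    return 1.0 - kept / len(values)
```

A real implementation would operate on mask tensors and should probably also support edge masks, but the aggregation-over-types step is the only genuinely new part for heterogeneous graphs.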
Alternatives
No response
Additional context
No response
As a heads up, #8512 seeks to make using Captum possible for some specific heterogeneous convolution layers for which the current implementation doesn't quite work (HANConv and HGTConv). I'm planning to update the PR to the current main branch soon. We might need to have a similar approach for the other explainers (special implementations due to the differing ways HANConv and HGTConv initialize their edge indices when propagating)
I've also been toying around with implementing a heterogeneous version of GNNExplainer and PGExplainer based on DGL's own heterogeneous implementations of these, though I haven't been totally successful yet (and I think there are a few things we can improve in terms of their implementation).
I think the most important feature is to support `GNNExplainer` for heterogeneous graphs. Hopefully it shouldn't be hard to do, since we just need to make sure that `edge_mask` and `node_mask` are created for every edge/node type.
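That per-type mask creation could look roughly like this (a minimal sketch with illustrative names and a standard random initialization, not the actual PyG implementation):

```python
import torch

def init_hetero_masks(num_nodes_dict, num_edges_dict, std=0.1):
    """Hypothetical helper for a heterogeneous GNNExplainer: create one
    learnable mask per node type and one per edge type, mirroring the
    single-mask initialization used in the homogeneous case.

    num_nodes_dict: {node_type: num_nodes}
    num_edges_dict: {(src, rel, dst): num_edges}
    """
    node_mask = {
        node_type: torch.nn.Parameter(std * torch.randn(num_nodes))
        for node_type, num_nodes in num_nodes_dict.items()
    }
    edge_mask = {
        edge_type: torch.nn.Parameter(std * torch.randn(num_edges))
        for edge_type, num_edges in num_edges_dict.items()
    }
    return node_mask, edge_mask
```

The optimization loop itself should then be nearly identical to the homogeneous one, just iterating over these dicts when applying masks and computing the regularization terms.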