
How to do Aggregate on a Graph whose nodes are all vectors #176

Open
SchenbergZY opened this issue Mar 15, 2023 · 6 comments
Labels: question (Further information is requested)

Comments

@SchenbergZY

In the demo and the wiki, the input to the Aggregate layer consists of 3-channel pictures. But I would like to know: if the input data is (Bs, N, d) and the adjacency matrix is (Bs, N, N), how do I use this Aggregate layer to output (Bs, N, d1), where N is the number of nodes in a graph and d1 is the new feature dimension?

@romanngg
Contributor

Aggregate only does (weighted) sum-pooling, so indeed the output will be of the same shape (B, N, d), and it has no trainable parameters. To change the channel size, add a stax.Dense(d1) layer afterwards; note that it works with any input shapes/dimensions (equivalent to a 1x1[...x1] convolution), you just need to specify the channel_axis (https://neural-tangents.readthedocs.io/en/latest/_autosummary/neural_tangents.stax.Dense.html). Alternatively, you can also use stax.Conv or stax.ConvLocal layers. Lmk if this helps!
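To make the shapes concrete, here is a minimal numpy sketch of the math the Aggregate + Dense(d1) combination computes (this is just the underlying linear algebra, not the neural_tangents API; all array names are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
Bs, N, d, d1 = 2, 5, 3, 7

x = rng.normal(size=(Bs, N, d))        # node features, one graph per batch entry
A = rng.normal(size=(Bs, N, N))        # (weighted) adjacency matrix per graph
W = rng.normal(size=(d, d1))           # Dense(d1) weights, acting on the channel axis

# Aggregate: weighted sum over each node's neighbours -> shape stays (Bs, N, d)
agg = np.einsum('bij,bjd->bid', A, x)

# Dense(d1): a 1x1 "convolution" over the channel axis -> (Bs, N, d1)
out = agg @ W

print(out.shape)  # (2, 5, 7)
```

So the aggregation step never changes the channel size; only the Dense layer afterwards maps d to d1.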

@romanngg added the question label Mar 15, 2023
@SchenbergZY
Author

SchenbergZY commented Mar 16, 2023


Thank you for your reply!
But I still have a question about channel_axis: why does it exist? In the original paper I could not find anything about a channel axis. Compared to the original code by the paper's authors, how can I ignore this channel_axis parameter?

@romanngg
Contributor

Which paper/code do you refer to? The channel axis is just the axis that contains your channels / hidden units / features; in your example it's the last axis (-1, or equivalently 2), of size d or d1. It's ubiquitous in standard deep learning layers. It defaults to the last axis (-1), but you have the flexibility to place it elsewhere by setting the channel_axis parameter.
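As an illustration of what a non-default channel axis means, here is a hedged numpy sketch of a dense map applied along axis 1 instead of the last axis (the array names and shapes are invented for the example; a real stax.Dense(d1, channel_axis=1) would handle this internally):

```python
import numpy as np

x = np.ones((2, 3, 5))               # channels moved to axis 1: (Bs, d, N) with d=3
W = np.ones((3, 7))                  # maps d=3 input channels to d1=7 output channels

# Contract W against axis 1 of x, leaving batch and node axes untouched,
# then move the new channel dimension back to axis 1:
out = np.moveaxis(np.tensordot(x, W, axes=([1], [0])), -1, 1)

print(out.shape)  # (2, 7, 5)
```

With the default channel_axis=-1, the same operation is just `x @ W` on a (Bs, N, d) array; the parameter only tells the layer which axis holds the features.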

@SchenbergZY
Author

SchenbergZY commented Mar 16, 2023


In the paper arXiv:1905.13192 (which you cite in your Aggregate wiki), the example datasets contain no feature dimension; the nodes are scalars, and only the nodes and their neighbours are given.
In arXiv:2103.03113, GCNTK is specified but no public code is provided, so I need to implement GCNTK myself.
Maybe I can set channel_axis=1 for the GCNTK construction and use axis -1 for my d and d1. I'm not sure if this is correct.
Thank you again for your answer.

@romanngg
Contributor

Thanks for clarifying! If your input has no channel axis, I'd suggest adding a singleton channel axis of size d=1, something like x = jnp.expand_dims(x, channel_axis), so that it will be of shape (B, N, 1) (and then you can use the default channel_axis=-1). Lmk if this works for you!
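A tiny sketch of that suggestion (using numpy here so it runs standalone; with JAX the call would be jnp.expand_dims instead, and the shapes below are invented for the example):

```python
import numpy as np

x = np.ones((4, 10))        # (B, N): scalar node values, no channel axis
x = np.expand_dims(x, -1)   # append a singleton channel axis

print(x.shape)  # (4, 10, 1)
```

After this reshape the default channel_axis=-1 works unchanged, and a later Dense(d1) will map the singleton channel to d1 features.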

@yCobanoglu

Check out my repo on transductive node classification/regression using Graph Neural Network Gaussian Processes and the Graph Neural Tangent Kernel, built with the Neural Tangents library.
