Questions about Fast Point Transformer #17

net-F opened this issue Jan 4, 2024 · 0 comments
net-F commented Jan 4, 2024

Thanks for your amazing work. I have a few questions about the implementation of this architecture; thank you in advance for any answers.

  1. In the code of LightweightSelfAttentionLayer, why is the inter position embedding initialized as a learnable random parameter?

self.inter_pos_enc = nn.Parameter(torch.FloatTensor(self.kernel_volume, self.num_heads, self.attn_channels))  # shape: (kernel_volume, num_heads, attn_channels)
nn.init.normal_(self.inter_pos_enc, 0, 1)  # standard-normal initialization

According to Fig. 3 in the paper, shouldn't it be obtained from the coordinate differences between the current voxel and its neighboring voxels? (See the sketch below for what I mean.)
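For clarity, here is a minimal sketch of what I would have expected instead: the inter-voxel position encoding produced by a small MLP applied to coordinate differences. The names RelPosEncodingSketch, rel_pos_mlp, and rel_coords are my own and are not from the repository; this is just to illustrate the question, not the actual implementation.

```python
import torch
import torch.nn as nn

class RelPosEncodingSketch(nn.Module):
    """Sketch: derive the inter-voxel position encoding from coordinate
    differences instead of a freely learnable table (hypothetical names)."""

    def __init__(self, num_heads: int, attn_channels: int):
        super().__init__()
        self.num_heads = num_heads
        self.attn_channels = attn_channels
        # Small MLP mapping a 3-D coordinate offset to one embedding per head.
        self.rel_pos_mlp = nn.Sequential(
            nn.Linear(3, attn_channels),
            nn.ReLU(inplace=True),
            nn.Linear(attn_channels, num_heads * attn_channels),
        )

    def forward(self, rel_coords: torch.Tensor) -> torch.Tensor:
        # rel_coords: (kernel_volume, 3), coordinate difference between the
        # query voxel center and each neighboring voxel center.
        out = self.rel_pos_mlp(rel_coords)                        # (K, H * C)
        return out.view(-1, self.num_heads, self.attn_channels)   # (K, H, C)
```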

  2. How many neighboring voxels are actually indexed in LightweightSelfAttentionLayer? Is the number of neighboring voxels determined by the kernel_size input parameter, and are the neighboring voxels only the valid (non-empty) voxels contained in the kernel? (See the second sketch below.)
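To make the second question concrete, here is a small sketch of my current understanding, assuming a cubic kernel in 3-D: the kernel defines kernel_size**3 candidate offsets, but only the occupied voxels among them would be attended to. The helpers enumerate_kernel_offsets and gather_valid_neighbors are hypothetical and only illustrate what I mean by "valid voxels contained in the kernel".

```python
from itertools import product

def enumerate_kernel_offsets(kernel_size: int = 3, dim: int = 3):
    """All integer offsets in a cubic kernel, e.g. 3**3 = 27 for kernel_size=3."""
    half = kernel_size // 2
    return list(product(range(-half, half + 1), repeat=dim))

def gather_valid_neighbors(center, occupied_voxels, kernel_size: int = 3):
    """Return only the kernel neighbors that are actually occupied (valid)."""
    neighbors = []
    for offset in enumerate_kernel_offsets(kernel_size):
        coord = tuple(c + o for c, o in zip(center, offset))
        if coord in occupied_voxels:  # sparse voxel grid: most offsets are empty
            neighbors.append(coord)
    return neighbors

# Example: kernel_volume = 27 candidate offsets, but only occupied voxels count.
occupied = {(0, 0, 0), (1, 0, 0), (0, -1, 0)}
print(len(enumerate_kernel_offsets(3)))             # 27
print(gather_valid_neighbors((0, 0, 0), occupied))  # the 3 occupied neighbors, including the center
```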