
Converting between Deeprobust and pytorch geometric #118

Open
akul-goyal opened this issue Sep 6, 2022 · 9 comments

@akul-goyal

akul-goyal commented Sep 6, 2022

Hi,

I am trying to attack my own custom model using the topology attack listed under graph/global_attack. The attack runs fine until here, where I have a modified adjacency matrix that I want to pass into my PyTorch Geometric model. Since it is in adjacency-matrix format rather than the edge_index format that PyTorch Geometric uses, how can I convert my adjacency matrix to a format PyTorch Geometric can use without losing the gradients that are needed later for backprop? I tried adj_matrix.nonzero(), but that gets rid of the gradients.

More simply put, can you attack GAT in deeprobust using the topology attack?
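For context, a minimal sketch (with made-up tensors, not DeepRobust's actual attack code) of why nonzero() drops gradients while indexing the dense matrix for edge weights keeps them:

```python
import torch

# Dense adjacency with gradients, as a topology attack would produce.
adj = torch.tensor([[0., 1.], [1., 0.]], requires_grad=True)

# nonzero() returns integer indices: a fresh LongTensor with no grad history.
edge_index = adj.nonzero().t()
print(edge_index.requires_grad)  # False: gradients are lost here

# Indexing the dense matrix with those indices keeps the autograd graph intact.
row, col = edge_index
edge_weight = adj[row, col]
print(edge_weight.requires_grad)  # True: backprop can reach adj
```

So a PyG model that accepts an edge_weight argument can still receive gradients with respect to the dense adjacency, even though edge_index itself is non-differentiable.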

@ChandlerBang
Collaborator

Hi,

The current code does not support attacking GAT using the topology attack. You may refer to the PRBCD attack to implement the attack with PyG.

However, we will soon include PRBCD in deeprobust based on PyG (maybe in one month). Please stay tuned :)

@akul-goyal
Author

Hi @ChandlerBang,

Thanks for the quick response. Is it possible to provide some intuition on how PRBCD attacks PyG models? In the current format of DeepRobust, a dense matrix gets modified. When I convert from DeepRobust to PyG, the gradients of the dense matrix are lost. How can you preserve them? Furthermore, I am using a NeighborSampler for PyG, so is there a way to pass the dense matrix to the neighbor sampler without losing the gradients?

@ChandlerBang
Collaborator

Hey, you may take a look at their paper first. Basically, at each step they sample a block of edge indices (a small portion of all possible edge indices) and optimize the edge weights by gradient descent.
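A hedged sketch of that idea (illustrative names and a stand-in loss, not PRBCD's actual implementation): sample a random block of candidate edges, relax their weights into [0, 1], and take a projected gradient step on the weights only:

```python
import torch

n, block_size = 100, 50  # number of nodes, size of the sampled block

# Sample a random block of candidate edge indices.
row = torch.randint(0, n, (block_size,))
col = torch.randint(0, n, (block_size,))
edge_index = torch.stack([row, col])

# Relaxed edge weights, optimized by gradient descent.
edge_weight = torch.full((block_size,), 0.5, requires_grad=True)

# Stand-in for the attack loss; a real attack would run the GNN on
# (edge_index, edge_weight) and use the model's training loss.
loss = -(edge_weight * torch.rand(block_size)).sum()
loss.backward()

# One projected gradient step on the edge weights only.
with torch.no_grad():
    edge_weight -= 0.1 * edge_weight.grad
    edge_weight.clamp_(0.0, 1.0)
```

Because only the sampled block's weights are optimized, the dense n-by-n adjacency never needs gradients, which is what makes the approach compatible with sparse PyG inputs.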

@YaningJia

@akul-goyal Hi,
I met the same question. I'm trying to implement the topology attack on GAT. First I used PyG and the same approach as you; then, to avoid the adjacency problem, I used plain PyTorch, but both get rid of the gradients. Did you solve it?

@akul-goyal
Author

Yes, I used the following for help! pyg-team/pytorch_geometric#1511

@YaningJia

Thanks a lot

@YaningJia

@akul-goyal Hi,
I'm sorry to bother you again. Following the link you provided, I made the following modifications:
edge_index = adj.nonzero().t()
row, col = edge_index
edge_weight = adj[row, col]

and it solves the problem of vanishing gradients. I then use modified_adj as input to GAT, but the test accuracy with modified_adj does not show an apparent decline. I wonder if your topology attack is valid. Can you show me some code for your modification if available? Thank you.
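One way to check whether the attack can obtain gradients at all, independent of GAT (a toy check with made-up tensors, not the thread's actual model):

```python
import torch

# Stand-in for a dense modified adjacency produced by the attack.
adj = torch.rand(4, 4)
adj.requires_grad_(True)

edge_index = adj.detach().nonzero().t()  # integer indices only
row, col = edge_index
edge_weight = adj[row, col]              # differentiable path back to adj

loss = edge_weight.sum()                 # stand-in for the model loss
grad = torch.autograd.grad(loss, adj)[0]
print(grad.abs().sum() > 0)              # gradient reaches the dense adjacency
```

If the gradient here is nonzero but the attack still has no effect, the problem is more likely in the perturbation budget or the loss than in the conversion step.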

@akul-goyal
Author

Hey, I am not sure I understand you correctly, but based on your code, nonzero() does not preserve gradients, so that may be the problem.

@YaningJia

YaningJia commented Oct 31, 2022

The aim of nonzero() is to convert the dense adjacency matrix to the edge_index format that PyG expects. So if I use PyG, what should I do?
Here is the code.
The victim model is GATConv, provided by PyG.
for t in tqdm(range(epochs)):
    # update victim model
    victim_model.train()
    modified_adj = self.get_modified_adj(ori_adj)
    adj_norm = utils.normalize_adj_tensor(modified_adj)
    edge_index = (adj_norm > 0).nonzero().t()
    row, col = edge_index
    edge_weight = adj_norm[row, col]
    output = victim_model(ori_features, edge_index, edge_weight)
    loss = self._loss(output[idx_train], labels[idx_train])

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

    # generate pgd attack
    victim_model.eval()
    modified_adj = self.get_modified_adj(ori_adj)
    adj_norm = utils.normalize_adj_tensor(modified_adj)
    edge_index = (adj_norm > 0).nonzero().t()
    row, col = edge_index
    edge_weight = adj_norm[row, col]
    output = victim_model(ori_features, edge_index, edge_weight)
    loss = self._loss(output[idx_train], labels[idx_train])
    adj_grad = torch.autograd.grad(loss, self.adj_changes)[0]

Could you reply if you know the problem, whenever you have free time? Thank you.
