
Question on 1D sequence functionality of DiffAttack #18

Open

Nikos-86 opened this issue Apr 5, 2024 · 3 comments

Nikos-86 commented Apr 5, 2024

Hello,

I am interested in using your DiffAttack on 1D sequences, with the aim of making them adversarial against a 1D neural-net classifier (for that specific type of sequence). I have a few questions, if you can spare some time.

  1. Will DiffAttack work in a 1D setting? I noticed that it is primarily set up for images. Are there any modifications that need to be made beforehand? What do you suggest?

  2. The cross-attention and self-attention are attributes of Stable Diffusion (as I understand it). Given that it might be difficult to find a Stable Diffusion model already trained on these specific 1D sequences, what do you think would be a good approach instead? Would the attack still work with some sort of modification? What are your thoughts on this?

This is more of a discussion than an issue per se. We could continue it offline if you'd like. Let me know!

Appreciate your time.

WindVChen (Owner) commented

Hi @Nikos-86,

Thank you for reaching out. While I haven't extensively explored manipulating 1D data, I'll share my thoughts in the hope that they offer a useful reference.

In theory, DiffAttack should be adaptable to 1D settings, but not every aspect of its design may translate seamlessly. As you observed, certain components of DiffAttack rely on cross-attention and self-attention maps, so if you intend to adapt those components to 1D data, you may first need to locate a pretrained 1D diffusion model that incorporates such attention modules. The model's structure need not strictly match Stable Diffusion's; what matters is whether phenomena similar to those in our Figure 3 (in 1D, perhaps relationships among tokens rather than image pixels) appear in the cross- and self-attention maps of 1D data. If they do, it's plausible that DiffAttack's attention-map designs can be carried over to a 1D context. As for the specific implementation details of the 2D-to-1D modifications, I'm not well versed in 1D data, so I can't offer much advice there, sorry.
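One way to check this empirically is to hook the attention modules of whatever 1D diffusion model you find and inspect the maps directly. Below is a minimal PyTorch sketch; `SeqDenoiser` and its layer layout are hypothetical placeholders to demonstrate the hook technique, not DiffAttack's architecture or any particular 1D model:

```python
import torch
import torch.nn as nn

# Hypothetical 1D denoiser with one self-attention block, just to
# illustrate attention-map inspection; not DiffAttack's architecture.
class SeqDenoiser(nn.Module):
    def __init__(self, dim=64, heads=4):
        super().__init__()
        self.proj_in = nn.Linear(1, dim)
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.proj_out = nn.Linear(dim, 1)

    def forward(self, x):            # x: (batch, seq_len, 1)
        h = self.proj_in(x)
        h, _ = self.attn(h, h, h)    # self-attention over sequence positions
        return self.proj_out(h)

attn_maps = []

def grab_attention(module, inputs, output):
    # nn.MultiheadAttention returns (output, attention_weights)
    attn_maps.append(output[1].detach())

model = SeqDenoiser()
model.attn.register_forward_hook(grab_attention)

x = torch.randn(2, 128, 1)           # two sequences of length 128
model(x)
print(attn_maps[0].shape)            # (2, 128, 128): position-to-position attention
```

You could then visualize these maps and check whether they exhibit the kind of structure our Figure 3 shows for images.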

Regarding the availability of a diffusion model with both cross- and self-attention modules in the 1D domain: I can't definitively comment on how common cross-attention modules are in the 1D field. However, self-attention modules are widely used, and I believe a self-attention module in a 1D context should still capture contextual relationships, much as it does in Section 3.4 of our paper. The loss function in Equation 5 should therefore be readily adaptable to 1D data.
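As a rough illustration (not Equation 5 itself; please refer to the paper for the exact formulation), a 1D analogue of a structure-preserving self-attention loss could simply compare the attention maps of the original and adversarial sequences. The function name below is hypothetical:

```python
import torch.nn.functional as F

def self_attention_consistency_loss(attn_adv, attn_orig):
    # attn_adv, attn_orig: (batch, seq_len, seq_len) self-attention maps
    # taken from the same layer for the adversarial and original sequence.
    # Penalizing their divergence keeps the adversarial sequence's
    # position-to-position relationships close to the original's.
    return F.mse_loss(attn_adv, attn_orig)
```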

Even if both cross- and self-attention modules are absent from the 1D diffusion model, the diffusion model itself should still make it possible to construct adversarial data, as demonstrated in Appendix N of v2 of our paper.
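To give a sense of what that attention-free route might look like in 1D (a sketch under assumptions, not the Appendix N implementation): invert the clean sequence into the model's latent space, then optimize that latent against the target classifier. All names here are hypothetical, and `denoise` stands in for a full differentiable sampling loop:

```python
import torch
import torch.nn.functional as F

def attack_via_latent(latent, denoise, classifier, true_label,
                      steps=50, lr=0.01):
    # `latent`: the clean sequence inverted into the diffusion model's
    #   latent space (e.g. via DDIM inversion) -- hypothetical input.
    # `denoise`: a differentiable map from latent back to a sequence
    #   (the deterministic sampling loop, collapsed to one call).
    # `classifier`: the 1D model under attack.
    latent = latent.clone().requires_grad_(True)
    opt = torch.optim.Adam([latent], lr=lr)
    for _ in range(steps):
        seq = denoise(latent)
        # Maximize the classifier's loss on the true label.
        loss = -F.cross_entropy(classifier(seq), true_label)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return denoise(latent).detach()
```

Optimizing in latent space rather than on the raw sequence lets the diffusion prior keep the result on the data manifold, which is the intuition behind the attention-free variant.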

I am available for further discussion. Feel free to reach out via email.

Hope this provides some assistance.

Nikos-86 (Author) commented Apr 8, 2024

Thank you for your reply, @WindVChen. I have sent you an e-mail, as you suggested there. Thank you!

Nikos-86 (Author) commented Apr 8, 2024

Hi again @WindVChen! Please let me know whether you have received my e-mail. Thank you!
