NLL for q0-q2 is 0 but for q3 is >2 #46

Open · mainpyp opened this issue Apr 21, 2023 · 5 comments


mainpyp commented Apr 21, 2023

Hey,
it's me again! :D
My loss, MSE, and, in the example below, NLL are dropping very fast and even reach 0.

Following up on my previous issue (#45), this means that the recovered token embeddings for timesteps in [0, 0.25n), [0.25n, 0.5n), and [0.5n, 0.75n), with n being the number of diffusion steps, are exactly the same for every timestep in those ranges.
However, the NLL for q3 ([0.75n, n]) is very high (>2), which is also reflected when generating sequences from the respective checkpoints. Did you encounter anything similar during your training process? 😊

[Screenshot: per-quartile training metrics, 2023-04-21 12:29]
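For context on what q0–q3 measure: DiffuSeq builds on improved-diffusion, which logs each loss term per timestep quartile; assuming the same convention here, a minimal sketch of that bucketing (the helper names are illustrative, not from the repo):

```python
import numpy as np

def quartile_of(t: int, num_timesteps: int) -> int:
    # q0 covers [0, 0.25n), q1 [0.25n, 0.5n), q2 [0.5n, 0.75n), q3 [0.75n, n)
    return int(4 * t / num_timesteps)

def per_quartile_nll(ts: np.ndarray, nll: np.ndarray, num_timesteps: int) -> dict:
    # Average the per-example NLL within each timestep quartile, mirroring
    # the nll_q0..nll_q3 curves shown in the training logs.
    buckets = {q: [] for q in range(4)}
    for t, loss in zip(ts, nll):
        buckets[quartile_of(int(t), num_timesteps)].append(float(loss))
    return {f"nll_q{q}": sum(v) / len(v) if v else float("nan")
            for q, v in buckets.items()}
```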

summmeer (Collaborator) commented

Hi,
I haven't encountered such a situation before. Which datasets did you use? It seems that $x_t$ with a larger sampled $t$ didn't get sufficient training.


mainpyp commented Apr 27, 2023

Thanks! I am using a protein sequence dataset with 100K training sequences and 6K diffusion steps; the same change in the metrics happens when I use 2K steps, however.
By "$x_t$ with a larger sampled $t$ didn't get sufficient training", do you mean that the last 25% of the denoising steps are not yet trained properly, while the previous denoising steps are already "done"?
If so, can I influence that without affecting the previous denoising steps? I also noticed that the q2 NLL goes up again after 1K iterations, whereas the q0/q1 NLL stays the same and the q3 NLL is still gradually going down.

[Screenshot: per-quartile training metrics, 2023-04-27 11:31]

summmeer (Collaborator) commented

> By "$x_t$ with a larger sampled $t$ didn't get sufficient training", do you mean that the last 25% of the denoising steps are not yet trained properly, while the previous denoising steps are already "done"?

Yes. This pattern is not observed on text sequences; I am not sure about protein sequences. Maybe you can edit the time sampler to add more weight to the q2 steps.
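A minimal sketch of such a re-weighted time sampler, modeled on the ScheduleSampler interface from improved-diffusion (which DiffuSeq's step_sample.py is based on); the class name and the boost factor are illustrative assumptions, not code from the repo:

```python
import numpy as np
import torch as th

class QuartileBoostSampler:
    """Sample timesteps t in [0, num_timesteps) with extra probability mass
    on one quartile (q2 by default) instead of sampling uniformly."""

    def __init__(self, num_timesteps: int, boost_quartile: int = 2, boost: float = 2.0):
        w = np.ones(num_timesteps, dtype=np.float64)
        lo = num_timesteps * boost_quartile // 4
        hi = num_timesteps * (boost_quartile + 1) // 4
        w[lo:hi] *= boost  # over-sample the chosen quartile
        self._weights = w

    def weights(self) -> np.ndarray:
        return self._weights

    def sample(self, batch_size: int, device):
        # Draw t ~ p, and return importance weights 1 / (N * p[t]) so the
        # expected training loss stays unbiased despite the skewed sampling.
        p = self._weights / self._weights.sum()
        idx = np.random.choice(len(p), size=(batch_size,), p=p)
        t = th.from_numpy(idx).long().to(device)
        iw = th.from_numpy(1.0 / (len(p) * p[idx])).float().to(device)
        return t, iw
```

With boost=2.0, q2 timesteps are drawn twice as often as the rest, while the returned importance weights keep the overall objective unchanged in expectation.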


mainpyp commented May 3, 2023

Thanks again :)
Two follow-ups to your answer:

> Maybe you can edit the time sampler to add more weight to the q2 steps.

  1. Do you mean adjusting the fixed sampler so that t values that fall in q2 are weighted higher?

  2. What is the expected behavior of the different metrics for the different q_n? I am currently running DiffuSeq on a reduced Conversation dataset, and there all metrics of a given type are in the same ballpark for each q_n. Is that also what you have observed?

summmeer (Collaborator) commented

Response to 1: Yes.
Response to 2: Yes; the specific number for each q_n is computed after re-weighting.
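One way to read "computed after re-weighting" (a sketch only, reusing the hypothetical QuartileBoostSampler and per_quartile_nll from the sketches above): apply the sampler's importance weights to the per-example NLL before the per-quartile averaging.

```python
import torch as th

num_timesteps = 2000
sampler = QuartileBoostSampler(num_timesteps)  # sketch from above
t, importance = sampler.sample(batch_size=64, device="cpu")

nll = th.rand(64)  # stand-in for the model's per-example NLL

# Re-weight first, then bucket into quartiles:
weighted = (nll * importance).numpy()
print(per_quartile_nll(t.numpy(), weighted, num_timesteps))
```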
