
About Stochastic encoder #51

Open
chuchen2017 opened this issue Mar 22, 2023 · 2 comments

Comments

@chuchen2017

Thanks for your excellent work! It is very inspiring!
I have a question about Stochastic encoder.
Equation 8 in the paper is described as the reverse of Equation 1. Equation 8 uses the U-Net ϵθ(x_t, t, z) trained during training to generate x_{t+1} from x_t. However, as far as I can see, ϵθ was trained for denoising, i.e. to generate x_{t-1} from x_t.
More specifically, ϵθ is used to predict the noise that already exists in x_t, so why does the stochastic encoder use that currently predicted noise to map the picture to the latent space?
Thanks for your answer!

@phizaz
Owner

phizaz commented Mar 22, 2023

Equation 8 uses the U-Net ϵθ(x_t, t, z) trained during training to generate x_{t+1} from x_t. However, as far as I can see, ϵθ was trained for denoising, i.e. to generate x_{t-1} from x_t.

Let me first say that the U-Net predicts the noise within the image, which can be thought of as a direction of change from $x_t$ to $x_0$ (I mean $x_0$, not $x_{t-1}$).
However, your intuition is not wrong: Eq 8 goes from $x_t$ to $x_{t+1}$, while Eq 1 goes from $x_t$ to $x_{t-1}$. How could both use the same direction from the same model?
In the limit $\Delta t \rightarrow 0$, the change from $x_t$ to $x_{t+1}$ and the change from $x_{t-1}$ to $x_t$ are described by the same direction! This is how you obtain Eq 8.
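To make this concrete, here is a sketch in DDIM-style notation (my own restatement, with $\alpha_t$ denoting the cumulative noise schedule): both steps reuse the same predicted noise $\epsilon_\theta(x_t, t, z)$, first to estimate $x_0$ and then to re-noise that estimate to the target level:

$$
\hat{x}_0 = \frac{x_t - \sqrt{1-\alpha_t}\,\epsilon_\theta(x_t, t, z)}{\sqrt{\alpha_t}}, \qquad
x_{t\pm 1} = \sqrt{\alpha_{t\pm 1}}\,\hat{x}_0 + \sqrt{1-\alpha_{t\pm 1}}\,\epsilon_\theta(x_t, t, z).
$$

Choosing $t-1$ is the deterministic denoising step, while choosing $t+1$ is the encoding step of Eq 8; the same direction $\epsilon_\theta$ appears in both, and as $\Delta t \rightarrow 0$ they trace the same trajectory.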

More specifically, ϵθ is used to predict the noise that already exists in x_t, so why does the stochastic encoder use that currently predicted noise to map the picture to the latent space?

I'm not clear about this question. In general, the stochastic encoder turns image $x_0$ into a specific noise map $x_T$ such that the render of that noise gives back the same initial image. It's fitting that the stochastic encoder would incrementally turn $x_0$ into $\epsilon$ (which is what $x_T$ is).
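As a rough sketch of what that incremental encoding looks like (a simplified stand-alone toy, not the actual repo code; `eps_model` is a hypothetical stand-in for $\epsilon_\theta(x_t, t, z)$ and `alphas` for the cumulative noise schedule):

```python
import numpy as np

def ddim_encode_step(x_t, eps, a_t, a_next):
    """One deterministic step from noise level a_t to a_next,
    reusing the noise `eps` predicted at the current step."""
    # estimate the clean image x0 from x_t and the predicted noise
    x0_pred = (x_t - np.sqrt(1.0 - a_t) * eps) / np.sqrt(a_t)
    # re-noise the estimate to the next (noisier) level with the same eps
    return np.sqrt(a_next) * x0_pred + np.sqrt(1.0 - a_next) * eps

def stochastic_encode(x0, eps_model, alphas):
    """Map an image x0 to its latent noise map x_T by iterating the
    encoding step; eps_model(x, t) stands in for eps_theta(x_t, t, z)."""
    x = x0
    for t in range(len(alphas) - 1):
        eps = eps_model(x, t)
        x = ddim_encode_step(x, eps, alphas[t], alphas[t + 1])
    return x
```

Note that a single step is exactly invertible when the same `eps` is reused: applying `ddim_encode_step` with the levels swapped recovers $x_t$, which is why the rendered image matches the initial one.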

@chuchen2017
Author

Thanks for your reply!
I understand what the stochastic encoder is trying to do in this model.
But I'm still confused by the argument that the direction of change from x_t to x_{t+1} is the same as that from x_{t-1} to x_t.
Could you provide a mathematical proof to illustrate the process? Or could you point me to any other papers using the same process that you may have referred to while doing your work?
I am deeply grateful for your help!
