
Shouldn't we train pretrain_D? #10

Open
Smellly opened this issue May 5, 2018 · 2 comments
Smellly commented May 5, 2018

Please tell us: when should we set D_is_pretrain to True?
And what code creates the file referenced here:

npz_paths = ["mscoco_51000.npz"]

@tsenghungchen
Owner

We found that the results are similar whether D (the critic) is pre-trained or not, so that code is not released here.
For the Multi-modal Critic, it simply trains the classifier on three classes: paired, unpaired, and sentences generated by the pre-trained G.
For the Domain Critic, sentences from the source and target domains are used.

@tsenghungchen
Owner

tsenghungchen commented May 7, 2018

If you're interested, here is the npz file of negative samples from the MSCOCO model at ckpt-51000.
