
VAE enable annealing teacher forcing probability during training #7

Open
lilleswing opened this issue Dec 4, 2018 · 11 comments
@lilleswing (Contributor)

The VAE doesn't have teacher forcing. Teacher forcing is really needed for larger molecules.

Original Code
https://github.com/aspuru-guzik-group/chemical_vae/blob/master/chemvae/tgru_k2_gpu.py

Moses Code
https://github.com/molecularsets/moses/blob/master/moses/vae/model.py#L114-L147
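
For readers new to the term: teacher forcing means the decoder is trained on the ground-truth previous token instead of its own prediction at every step. A minimal PyTorch sketch of a teacher-forced decoder loss, with made-up module names and sizes (illustrative only, not the chemical_vae or MOSES code):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical sizes and modules, chosen only for illustration.
vocab_size, emb_dim, hid_dim = 30, 32, 64
emb = nn.Embedding(vocab_size, emb_dim)
gru = nn.GRU(emb_dim, hid_dim, batch_first=True)
fc = nn.Linear(hid_dim, vocab_size)

def teacher_forced_loss(tokens, h0):
    """tokens: (batch, seq_len) ground-truth ids; h0: (1, batch, hid_dim) initial decoder state."""
    # The decoder input at step t is always the true token at step t-1,
    # never the model's own sample; that is teacher forcing.
    inputs, targets = tokens[:, :-1], tokens[:, 1:]
    out, _ = gru(emb(inputs), h0)
    logits = fc(out)
    return F.cross_entropy(logits.reshape(-1, vocab_size), targets.reshape(-1))
```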

@danpol (Collaborator) commented Dec 4, 2018

@lilleswing, the code from MOSES you've sent implements teacher forcing. Did you mean that we should add free running for training?

@lilleswing (Contributor, Author)

Yes, I misread the code.
What is missing is annealing off the teacher forcing probability (though that was not a component of the original paper). The original paper always used teacher forcing during training and free running during sampling, so annealing would be an improvement over the paper's implementation.
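
For concreteness, one way to anneal teacher forcing off is scheduled sampling: at each decoding step feed the ground-truth token with probability p and the model's own (greedy) prediction otherwise, and decay p over training. A rough, self-contained sketch with hypothetical module and schedule names (not MOSES code):

```python
import random
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical decoder pieces, for illustration only.
vocab_size, emb_dim, hid_dim = 30, 32, 64
emb = nn.Embedding(vocab_size, emb_dim)
gru = nn.GRU(emb_dim, hid_dim, batch_first=True)
fc = nn.Linear(hid_dim, vocab_size)

def scheduled_sampling_loss(tokens, h0, tf_prob):
    """At each step, feed the true token with probability tf_prob, else the model's greedy prediction."""
    h, inp, losses = h0, tokens[:, :1], []
    for t in range(1, tokens.size(1)):
        out, h = gru(emb(inp), h)
        logits = fc(out[:, -1])
        losses.append(F.cross_entropy(logits, tokens[:, t]))
        use_truth = random.random() < tf_prob
        inp = tokens[:, t:t + 1] if use_truth else logits.argmax(-1, keepdim=True)
    return torch.stack(losses).mean()

def tf_prob_at(epoch, n_epochs, floor=0.5):
    # Example schedule: decay linearly from full teacher forcing (1.0) to `floor`.
    return max(floor, 1.0 - (1.0 - floor) * epoch / max(1, n_epochs - 1))
```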

lilleswing changed the title from "Teacher Forcing in VAE" to "VAE enable annealing teacher forcing probability during training" on Dec 5, 2018
@danpol (Collaborator) commented Dec 5, 2018

Yes, we'll add free running soon. It will probably appear as a separate model in the metrics table.

@liujunhongznn

Have you ever tested the reconstruction accuracy of the VAE model? I tested the reconstruction accuracy and the performance is very bad. Here is my testing code; is there any problem? Thanks!
```python
import random

import pandas as pd
import torch
from tqdm import tqdm

# NOTE: get_parser and VAE come from the MOSES codebase (the VAE scripts);
# their imports are omitted here, as in the original snippet.


def read_smiles_csv(path):
    return pd.read_csv(path, usecols=['SMILES'], squeeze=True).astype(str).tolist()


if __name__ == '__main__':
    parser = get_parser()
    config = parser.parse_known_args()[0]
    device = torch.device(config.device)

    if device.type.startswith('cuda'):
        torch.cuda.set_device(device.index or 0)

    model_config = torch.load(config.config_save)
    model_vocab = torch.load(config.vocab_save)
    model_state = torch.load(config.model_save)

    model = VAE(model_vocab, model_config)
    model.load_state_dict(model_state)
    model = model.to(device)
    model.eval()

    test_data_path = 'train.csv'
    test_data = random.sample(read_smiles_csv(test_data_path), 100)
    NUM_DEC = 500
    num = 0

    for ech in tqdm(test_data):
        tensors = [model.string2tensor(ech.strip(), device=device)]
        z_vecs, _ = model.forward_encoder(tensors)
        res_lst = []
        for i in tqdm(range(NUM_DEC)):
            res = model.sample(n_batch=z_vecs.size(0), z=z_vecs)
            res_lst.extend(res)
        # count as reconstructed if the input appears among any of the NUM_DEC decodes
        if ech in res_lst:
            num += 1
        print("recons num: ", num)
    print("reconstruct acc: ", num * 1.0 / 100)
```

@danpol (Collaborator) commented Apr 14, 2020

Hi, @liujunhongznn!

Low reconstruction quality is due to posterior collapse, which frequently happens in VAEs. Since the goal of MOSES is to model the generative distribution as well as possible, posterior collapse is acceptable for this task. If you want to obtain meaningful latent codes, try reducing the KL divergence weight.
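
For reference, the KL weight enters as a multiplier on the KL term of the VAE objective; a smaller weight (or annealing it up from zero) keeps the latent codes informative at the cost of matching the prior less tightly. A minimal sketch with made-up names, not the actual MOSES training loop:

```python
import torch
import torch.nn.functional as F

def vae_loss(recon_logits, targets, mu, logvar, kl_weight):
    """Token-level reconstruction term plus a weighted KL(q(z|x) || N(0, I)) term."""
    recon = F.cross_entropy(recon_logits.reshape(-1, recon_logits.size(-1)),
                            targets.reshape(-1))
    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    return recon + kl_weight * kl

# Using kl_weight < 1 (or ramping it up from 0 over the first epochs) is the usual
# way to trade sample quality for more meaningful latent codes.
```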

@bokertof commented Jun 2, 2020

@danpol Hello! Can you help me with the VAE? I'm mixed up. As you mentioned before, this VAE implementation does use the teacher forcing approach, but I don't see any loops over the decoder (except at evaluation time, for SMILES generation). Am I right that it's literally training with teacher forcing probability = 1, since we don't pass previously predicted tokens (like in seq2seq models)?

@danpol (Collaborator) commented Jun 3, 2020

Hi, @bokertof! VAE in MOSES uses teacher forcing—we pass the correct token, not the sampled one.

@bokertof commented Jun 3, 2020

@danpol OK, I got it. Can you tell me the reason not to use the sampled tokens as input? I'm trying to implement a similar net and ran into an issue where the model doesn't learn at all when it is fed previously predicted tokens.

@danpol (Collaborator) commented Jun 3, 2020

If you feed sampled tokens, you have to propagate the gradient through sampling (e.g., with REINFORCE), which has notoriously high variance. You could use variance reduction techniques, but it lies far from the notion of a "baseline".
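
For illustration, the REINFORCE surrogate mentioned above looks roughly like the following, with a running-mean control variate (often called a baseline in the RL literature) as the standard variance-reduction trick; the names are hypothetical and this is not MOSES code:

```python
import torch

def reinforce_loss(log_probs, rewards, control_variate):
    """REINFORCE surrogate: minimize -E[(reward - c) * log p(sampled tokens)].

    log_probs:       (batch,) summed log-probabilities of the sampled token sequence
    rewards:         (batch,) sequence-level reward, e.g. negative reconstruction error
    control_variate: scalar subtracted from the reward to reduce gradient variance
    """
    advantage = (rewards - control_variate).detach()  # no gradient through the reward
    return -(advantage * log_probs).mean()

class RunningMean:
    """Cheap control variate: exponential moving average of past rewards."""
    def __init__(self, momentum=0.9):
        self.value, self.momentum = 0.0, momentum
    def update(self, rewards):
        self.value = self.momentum * self.value + (1 - self.momentum) * rewards.mean().item()
        return self.value
```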

@bokertof commented Jun 3, 2020

Thank you so much!
