
Loading multiple models into memory at the same time #6

Open
j6mes opened this issue Apr 16, 2019 · 0 comments
j6mes commented Apr 16, 2019

Hi (making this a GitHub issue at Daniil's request)

I've been working on making an interactive version of this system so that it can be queried via a web API. I'm having some issues with the pipeline: the sentence ensemble currently loads only one model into memory at a time, which means that querying my API takes about 20 minutes to swap between all the models.

I've been working in this file: https://github.com/j6mes/fever-athene-system/blob/master/src/athene/system.py

Is it possible to load all the models into memory at once? I run into issues with TensorFlow variable scoping that make this difficult to do. I was thinking of doing something very simple with the following changes:

Original

    selection_model = SentenceESIM(h_max_length=sargs.c_max_length, s_max_length=sargs.s_max_length, learning_rate=sargs.learning_rate,
                       batch_size=sargs.batch_size, num_epoch=sargs.num_epoch, model_store_dir=sargs.sentence_model,
                       embedding=sentence_loader.embed, word_dict=sentence_loader.word_dict, dropout_rate=sargs.dropout_rate,
                       num_units=sargs.num_lstm_units, share_rnn=False, activation=tf.nn.tanh)

Changed

    # Build one independent SentenceESIM instance per ensemble member
    # (a list comprehension rather than [model] * n, which would repeat
    # the same object n times), then restore each checkpoint into its model.
    selection_models = [SentenceESIM(h_max_length=sargs.c_max_length, s_max_length=sargs.s_max_length, learning_rate=sargs.learning_rate,
                                     batch_size=sargs.batch_size, num_epoch=sargs.num_epoch, model_store_dir=sargs.sentence_model,
                                     embedding=sentence_loader.embed, word_dict=sentence_loader.word_dict, dropout_rate=sargs.dropout_rate,
                                     num_units=sargs.num_lstm_units, share_rnn=False, activation=tf.nn.tanh)
                        for _ in range(args.num_model)]

    for i, model in enumerate(selection_models):
        logger.info("Restore sentence model {}".format(i))
        model_store_path = os.path.join(args.sentence_model, "model{}".format(i + 1))
        model.restore_model(os.path.join(model_store_path, "best_model.ckpt"))
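
One way around the variable-scope collisions (not confirmed against this codebase) is to give every ensemble member its own tf.Graph and tf.Session, so the N copies of the ESIM variables never share a graph at all. Below is a minimal sketch of that pattern in plain TF1; build_fn is a hypothetical placeholder for whatever constructs the model's ops inside the current default graph, since how SentenceESIM builds its graph and session internally isn't shown in this issue.

    import os
    import tensorflow as tf

    def load_ensemble(model_store_dir, num_model, build_fn):
        """Load num_model independent checkpoints, each in its own graph/session.

        build_fn is a hypothetical callable that creates the model's variables
        and ops in the current default graph and returns whatever tensors are
        needed at inference time.
        """
        loaded = []
        for i in range(num_model):
            graph = tf.Graph()
            # Everything built under this context lives only in `graph`,
            # so variable names cannot collide with other ensemble members.
            with graph.as_default():
                tensors = build_fn()
                saver = tf.train.Saver()
            sess = tf.Session(graph=graph)
            ckpt = os.path.join(model_store_dir, "model{}".format(i + 1), "best_model.ckpt")
            saver.restore(sess, ckpt)
            loaded.append((sess, tensors))
        return loaded

The alternative of keeping all models in a single graph usually means wrapping each construction in its own tf.variable_scope("model{}".format(i)) and restoring with a Saver whose var_list maps the checkpoint names (without the scope prefix) to the scoped variables, which is more invasive than keeping the graphs separate.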
